00:00:00.001 Started by upstream project "autotest-per-patch" build number 122880 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.030 The recommended git tool is: git 00:00:00.030 using credential 00000000-0000-0000-0000-000000000002 00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.047 Fetching changes from the remote Git repository 00:00:00.049 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.072 Using shallow fetch with depth 1 00:00:00.072 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.072 > git --version # timeout=10 00:00:00.121 > git --version # 'git version 2.39.2' 00:00:00.121 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.122 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.122 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.564 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.575 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.587 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:02.587 > git config core.sparsecheckout # timeout=10 00:00:02.599 > git read-tree -mu HEAD # timeout=10 00:00:02.614 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:02.632 Commit message: "inventory/dev: add missing long names" 00:00:02.632 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:02.711 [Pipeline] Start of Pipeline 00:00:02.727 [Pipeline] library 00:00:02.728 Loading library shm_lib@master 00:00:02.729 Library shm_lib@master is cached. Copying from home. 00:00:02.745 [Pipeline] node 00:00:02.768 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:02.769 [Pipeline] { 00:00:02.781 [Pipeline] catchError 00:00:02.782 [Pipeline] { 00:00:02.798 [Pipeline] wrap 00:00:02.809 [Pipeline] { 00:00:02.818 [Pipeline] stage 00:00:02.820 [Pipeline] { (Prologue) 00:00:03.007 [Pipeline] sh 00:00:03.285 + logger -p user.info -t JENKINS-CI 00:00:03.302 [Pipeline] echo 00:00:03.303 Node: WFP20 00:00:03.311 [Pipeline] sh 00:00:03.609 [Pipeline] setCustomBuildProperty 00:00:03.622 [Pipeline] echo 00:00:03.624 Cleanup processes 00:00:03.630 [Pipeline] sh 00:00:03.913 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.913 1269250 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.927 [Pipeline] sh 00:00:04.209 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.209 ++ grep -v 'sudo pgrep' 00:00:04.209 ++ awk '{print $1}' 00:00:04.209 + sudo kill -9 00:00:04.209 + true 00:00:04.222 [Pipeline] cleanWs 00:00:04.230 [WS-CLEANUP] Deleting project workspace... 00:00:04.230 [WS-CLEANUP] Deferred wipeout is used... 
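For readability: the "Cleanup processes" xtrace above amounts to a single pipeline along the lines of the sketch below (reconstructed from the trace; the actual jbp script may differ). In this run pgrep matched only itself, so kill -9 received no PIDs and the trailing true absorbed the resulting error.

  # hypothetical reconstruction of the cleanup step traced above
  pids=$(sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk \
           | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true   # the "+ true" in the trace: kill had nothing to signal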
00:00:04.236 [WS-CLEANUP] done 00:00:04.239 [Pipeline] setCustomBuildProperty 00:00:04.249 [Pipeline] sh 00:00:04.524 + sudo git config --global --replace-all safe.directory '*' 00:00:04.592 [Pipeline] nodesByLabel 00:00:04.594 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.601 [Pipeline] httpRequest 00:00:04.605 HttpMethod: GET 00:00:04.605 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:04.608 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:04.612 Response Code: HTTP/1.1 200 OK 00:00:04.612 Success: Status code 200 is in the accepted range: 200,404 00:00:04.613 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:04.750 [Pipeline] sh 00:00:05.029 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:05.050 [Pipeline] httpRequest 00:00:05.054 HttpMethod: GET 00:00:05.054 URL: http://10.211.164.101/packages/spdk_01f10b8a3bf61d59422d8d60472346d8199e8eee.tar.gz 00:00:05.057 Sending request to url: http://10.211.164.101/packages/spdk_01f10b8a3bf61d59422d8d60472346d8199e8eee.tar.gz 00:00:05.061 Response Code: HTTP/1.1 200 OK 00:00:05.095 Success: Status code 200 is in the accepted range: 200,404 00:00:05.096 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_01f10b8a3bf61d59422d8d60472346d8199e8eee.tar.gz 00:00:20.216 [Pipeline] sh 00:00:20.495 + tar --no-same-owner -xf spdk_01f10b8a3bf61d59422d8d60472346d8199e8eee.tar.gz 00:00:23.037 [Pipeline] sh 00:00:23.318 + git -C spdk log --oneline -n5 00:00:23.318 01f10b8a3 raid: fix race between starting rebuild and creating io channel 00:00:23.318 4506c0c36 test/common: Enable inherit_errexit 00:00:23.318 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:00:23.318 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:00:23.318 1dc065205 test/scheduler: Calculate median of the cpu load samples 00:00:23.329 [Pipeline] } 00:00:23.347 [Pipeline] // stage 00:00:23.357 [Pipeline] stage 00:00:23.359 [Pipeline] { (Prepare) 00:00:23.403 [Pipeline] writeFile 00:00:23.423 [Pipeline] sh 00:00:23.705 + logger -p user.info -t JENKINS-CI 00:00:23.716 [Pipeline] sh 00:00:23.995 + logger -p user.info -t JENKINS-CI 00:00:24.007 [Pipeline] sh 00:00:24.283 + cat autorun-spdk.conf 00:00:24.283 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.283 SPDK_TEST_FUZZER_SHORT=1 00:00:24.283 SPDK_TEST_FUZZER=1 00:00:24.283 SPDK_RUN_UBSAN=1 00:00:24.291 RUN_NIGHTLY=0 00:00:24.296 [Pipeline] readFile 00:00:24.320 [Pipeline] withEnv 00:00:24.322 [Pipeline] { 00:00:24.335 [Pipeline] sh 00:00:24.614 + set -ex 00:00:24.614 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:24.614 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:24.614 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.614 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:24.614 ++ SPDK_TEST_FUZZER=1 00:00:24.614 ++ SPDK_RUN_UBSAN=1 00:00:24.614 ++ RUN_NIGHTLY=0 00:00:24.614 + case $SPDK_TEST_NVMF_NICS in 00:00:24.614 + DRIVERS= 00:00:24.614 + [[ -n '' ]] 00:00:24.614 + exit 0 00:00:24.623 [Pipeline] } 00:00:24.643 [Pipeline] // withEnv 00:00:24.649 [Pipeline] } 00:00:24.665 [Pipeline] // stage 00:00:24.674 [Pipeline] catchError 00:00:24.676 [Pipeline] { 00:00:24.690 [Pipeline] timeout 00:00:24.690 Timeout set to expire in 30 min 00:00:24.692 [Pipeline] { 00:00:24.708 [Pipeline] stage 00:00:24.710 [Pipeline] { 
(Tests) 00:00:24.724 [Pipeline] sh 00:00:25.003 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:25.003 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:25.003 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:25.003 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:25.003 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:25.003 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:25.003 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:25.003 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:25.003 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:25.003 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:25.003 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:25.003 + source /etc/os-release 00:00:25.003 ++ NAME='Fedora Linux' 00:00:25.003 ++ VERSION='38 (Cloud Edition)' 00:00:25.003 ++ ID=fedora 00:00:25.003 ++ VERSION_ID=38 00:00:25.003 ++ VERSION_CODENAME= 00:00:25.003 ++ PLATFORM_ID=platform:f38 00:00:25.003 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:25.003 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:25.003 ++ LOGO=fedora-logo-icon 00:00:25.003 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:25.003 ++ HOME_URL=https://fedoraproject.org/ 00:00:25.003 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:25.003 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:25.003 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:25.003 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:25.003 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:25.003 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:25.003 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:25.003 ++ SUPPORT_END=2024-05-14 00:00:25.003 ++ VARIANT='Cloud Edition' 00:00:25.003 ++ VARIANT_ID=cloud 00:00:25.003 + uname -a 00:00:25.003 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:25.003 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:28.288 Hugepages 00:00:28.288 node hugesize free / total 00:00:28.288 node0 1048576kB 0 / 0 00:00:28.288 node0 2048kB 0 / 0 00:00:28.288 node1 1048576kB 0 / 0 00:00:28.288 node1 2048kB 0 / 0 00:00:28.288 00:00:28.288 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:28.288 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:28.288 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:28.288 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:28.288 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:28.288 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:28.288 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:28.288 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:28.288 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:28.288 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:28.288 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:28.288 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:28.288 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:28.288 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:28.288 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:28.288 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:28.288 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:28.288 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:28.288 + rm -f /tmp/spdk-ld-path 00:00:28.288 + source autorun-spdk.conf 00:00:28.288 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.288 ++ 
SPDK_TEST_FUZZER_SHORT=1 00:00:28.288 ++ SPDK_TEST_FUZZER=1 00:00:28.288 ++ SPDK_RUN_UBSAN=1 00:00:28.288 ++ RUN_NIGHTLY=0 00:00:28.288 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:28.288 + [[ -n '' ]] 00:00:28.288 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:28.288 + for M in /var/spdk/build-*-manifest.txt 00:00:28.288 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:28.288 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:28.288 + for M in /var/spdk/build-*-manifest.txt 00:00:28.288 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:28.288 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:28.288 ++ uname 00:00:28.288 + [[ Linux == \L\i\n\u\x ]] 00:00:28.288 + sudo dmesg -T 00:00:28.288 + sudo dmesg --clear 00:00:28.288 + dmesg_pid=1270138 00:00:28.288 + [[ Fedora Linux == FreeBSD ]] 00:00:28.288 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:28.288 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:28.288 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:28.288 + [[ -x /usr/src/fio-static/fio ]] 00:00:28.288 + export FIO_BIN=/usr/src/fio-static/fio 00:00:28.288 + FIO_BIN=/usr/src/fio-static/fio 00:00:28.288 + sudo dmesg -Tw 00:00:28.288 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:28.288 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:28.288 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:28.288 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:28.288 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:28.288 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:28.288 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:28.288 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:28.288 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:28.289 Test configuration: 00:00:28.289 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.289 SPDK_TEST_FUZZER_SHORT=1 00:00:28.289 SPDK_TEST_FUZZER=1 00:00:28.289 SPDK_RUN_UBSAN=1 00:00:28.289 RUN_NIGHTLY=0 10:52:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:28.289 10:52:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:28.289 10:52:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:28.289 10:52:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:28.289 10:52:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.289 10:52:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.289 10:52:25 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.289 10:52:25 -- paths/export.sh@5 -- $ export PATH 00:00:28.289 10:52:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.289 10:52:25 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:28.289 10:52:25 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:28.289 10:52:25 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715763145.XXXXXX 00:00:28.289 10:52:25 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715763145.xfmN0L 00:00:28.289 10:52:25 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:28.289 10:52:25 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:28.289 10:52:25 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:28.289 10:52:25 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:28.289 10:52:25 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:28.289 10:52:25 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:28.289 10:52:25 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:28.289 10:52:25 -- common/autotest_common.sh@10 -- $ set +x 00:00:28.289 10:52:25 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:28.289 10:52:25 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:28.289 10:52:25 -- pm/common@17 -- $ local monitor 00:00:28.289 10:52:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:28.289 10:52:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:28.289 10:52:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:28.289 10:52:25 -- pm/common@21 -- $ date +%s 00:00:28.289 10:52:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:28.289 10:52:25 -- pm/common@21 -- $ date +%s 00:00:28.289 10:52:25 -- pm/common@25 -- $ sleep 1 00:00:28.289 10:52:25 -- pm/common@21 -- $ date +%s 00:00:28.289 10:52:25 -- pm/common@21 -- $ date +%s 00:00:28.289 10:52:25 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715763145 00:00:28.289 10:52:25 -- 
pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715763145 00:00:28.289 10:52:25 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715763145 00:00:28.289 10:52:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715763145 00:00:28.548 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715763145_collect-cpu-temp.pm.log 00:00:28.548 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715763145_collect-vmstat.pm.log 00:00:28.548 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715763145_collect-cpu-load.pm.log 00:00:28.548 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715763145_collect-bmc-pm.bmc.pm.log 00:00:29.485 10:52:26 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:29.485 10:52:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:29.485 10:52:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:29.485 10:52:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:29.485 10:52:26 -- spdk/autobuild.sh@16 -- $ date -u 00:00:29.485 Wed May 15 08:52:26 AM UTC 2024 00:00:29.485 10:52:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:29.485 v24.05-pre-659-g01f10b8a3 00:00:29.485 10:52:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:29.485 10:52:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:29.485 10:52:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:29.485 10:52:26 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:00:29.485 10:52:26 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:29.485 10:52:26 -- common/autotest_common.sh@10 -- $ set +x 00:00:29.485 ************************************ 00:00:29.485 START TEST ubsan 00:00:29.485 ************************************ 00:00:29.485 10:52:26 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:00:29.485 using ubsan 00:00:29.485 00:00:29.485 real 0m0.001s 00:00:29.485 user 0m0.000s 00:00:29.485 sys 0m0.001s 00:00:29.485 10:52:26 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:00:29.485 10:52:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:29.485 ************************************ 00:00:29.485 END TEST ubsan 00:00:29.485 ************************************ 00:00:29.485 10:52:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:29.485 10:52:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:29.485 10:52:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:29.485 10:52:26 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:29.485 10:52:26 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:29.485 10:52:26 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:29.485 10:52:26 -- common/autotest_common.sh@1098 -- $ '[' 2 -le 1 ']' 00:00:29.485 10:52:26 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:29.485 10:52:26 -- 
common/autotest_common.sh@10 -- $ set +x 00:00:29.485 ************************************ 00:00:29.485 START TEST autobuild_llvm_precompile 00:00:29.485 ************************************ 00:00:29.485 10:52:26 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ _llvm_precompile 00:00:29.485 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:00:29.744 Target: x86_64-redhat-linux-gnu 00:00:29.744 Thread model: posix 00:00:29.744 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:00:29.744 10:52:26 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:30.003 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:30.003 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:30.262 Using 'verbs' RDMA provider 00:00:46.141 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:01.026 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:01.026 Creating mk/config.mk...done. 00:01:01.026 Creating mk/cc.flags.mk...done. 00:01:01.026 Type 'make' to build. 
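The autobuild_llvm_precompile trace above shows how the clang fuzzer runtime library is located and wired into ./configure. A condensed sketch of that logic, paraphrased from the autobuild_common.sh lines echoed in the trace (illustrative, not the script itself; clang_num is taken from the clang --version output above, and only a subset of the configure flags is shown):

  shopt -s extglob   # needed for the ?(-x86_64) glob
  clang_num=16       # parsed from "clang version 16.0.6" above
  fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
  fuzzer_lib=${fuzzer_libs[0]}   # resolves here to /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
  CC=clang-16 CXX=clang++-16 ./configure --enable-debug --enable-werror --enable-ubsan \
      --with-vfio-user --with-fuzzer="$fuzzer_lib"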
00:01:01.026 00:01:01.026 real 0m29.792s 00:01:01.026 user 0m12.746s 00:01:01.026 sys 0m16.441s 00:01:01.026 10:52:56 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:01.026 10:52:56 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:01.026 ************************************ 00:01:01.026 END TEST autobuild_llvm_precompile 00:01:01.026 ************************************ 00:01:01.026 10:52:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:01.026 10:52:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:01.026 10:52:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:01.026 10:52:56 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:01.027 10:52:56 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:01.027 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:01.027 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:01.027 Using 'verbs' RDMA provider 00:01:13.251 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:25.559 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:25.559 Creating mk/config.mk...done. 00:01:25.559 Creating mk/cc.flags.mk...done. 00:01:25.559 Type 'make' to build. 00:01:25.559 10:53:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:25.559 10:53:21 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:25.559 10:53:21 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:25.559 10:53:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.559 ************************************ 00:01:25.559 START TEST make 00:01:25.559 ************************************ 00:01:25.559 10:53:21 make -- common/autotest_common.sh@1122 -- $ make -j112 00:01:25.559 make[1]: Nothing to be done for 'all'. 
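The START TEST / END TEST banners and the real/user/sys timings above come from SPDK's run_test helper (in common/autotest_common.sh per the stack prefixes), which times the wrapped command; a minimal illustration of the pattern (not the real helper, which also manages xtrace and failure reporting):

  run_test() {                     # simplified stand-in for SPDK's run_test
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
  }
  run_test make make -j112         # as invoked by autobuild.sh above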
00:01:26.491 The Meson build system 00:01:26.491 Version: 1.3.1 00:01:26.491 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:26.491 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:26.491 Build type: native build 00:01:26.491 Project name: libvfio-user 00:01:26.491 Project version: 0.0.1 00:01:26.491 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:26.491 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:26.491 Host machine cpu family: x86_64 00:01:26.491 Host machine cpu: x86_64 00:01:26.491 Run-time dependency threads found: YES 00:01:26.491 Library dl found: YES 00:01:26.491 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:26.491 Run-time dependency json-c found: YES 0.17 00:01:26.491 Run-time dependency cmocka found: YES 1.1.7 00:01:26.491 Program pytest-3 found: NO 00:01:26.491 Program flake8 found: NO 00:01:26.491 Program misspell-fixer found: NO 00:01:26.491 Program restructuredtext-lint found: NO 00:01:26.491 Program valgrind found: YES (/usr/bin/valgrind) 00:01:26.491 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:26.491 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:26.491 Compiler for C supports arguments -Wwrite-strings: YES 00:01:26.491 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:26.491 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:26.491 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:26.491 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
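libvfio-user is configured here as a Meson project and then built and staged into spdk/build/libvfio-user (the ninja and DESTDIR meson install commands appear just below). A rough equivalent of that sequence, with flags inferred from the "User defined options" summary below rather than taken verbatim from SPDK's build scripts:

  # sketch of the libvfio-user build/install seen below; flags inferred, not verbatim
  meson setup build-debug ../libvfio-user --buildtype=debug -Ddefault_library=static
  ninja -C build-debug
  DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C build-debug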
00:01:26.491 Build targets in project: 8 00:01:26.491 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:26.491 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:26.491 00:01:26.491 libvfio-user 0.0.1 00:01:26.491 00:01:26.491 User defined options 00:01:26.491 buildtype : debug 00:01:26.491 default_library: static 00:01:26.491 libdir : /usr/local/lib 00:01:26.491 00:01:26.491 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.749 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:27.006 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:27.006 [2/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:27.006 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:27.006 [4/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:27.006 [5/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:27.006 [6/36] Compiling C object samples/null.p/null.c.o 00:01:27.006 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:27.006 [8/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:27.006 [9/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:27.006 [10/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:27.006 [11/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:27.006 [12/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:27.006 [13/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:27.006 [14/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:27.006 [15/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:27.006 [16/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:27.006 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:27.006 [18/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:27.006 [19/36] Compiling C object samples/server.p/server.c.o 00:01:27.006 [20/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:27.006 [21/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:27.006 [22/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:27.006 [23/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:27.006 [24/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:27.006 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:27.006 [26/36] Compiling C object samples/client.p/client.c.o 00:01:27.006 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:27.006 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:27.006 [29/36] Linking static target lib/libvfio-user.a 00:01:27.006 [30/36] Linking target samples/client 00:01:27.006 [31/36] Linking target test/unit_tests 00:01:27.006 [32/36] Linking target samples/server 00:01:27.006 [33/36] Linking target samples/null 00:01:27.006 [34/36] Linking target samples/lspci 00:01:27.006 [35/36] Linking target samples/gpio-pci-idio-16 00:01:27.006 [36/36] Linking target samples/shadow_ioeventfd_server 00:01:27.006 INFO: autodetecting backend as ninja 00:01:27.006 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:27.006 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:27.572 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:27.572 ninja: no work to do. 00:01:32.843 The Meson build system 00:01:32.843 Version: 1.3.1 00:01:32.843 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:32.843 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:32.843 Build type: native build 00:01:32.843 Program cat found: YES (/usr/bin/cat) 00:01:32.843 Project name: DPDK 00:01:32.843 Project version: 23.11.0 00:01:32.843 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:32.843 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:32.844 Host machine cpu family: x86_64 00:01:32.844 Host machine cpu: x86_64 00:01:32.844 Message: ## Building in Developer Mode ## 00:01:32.844 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:32.844 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:32.844 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:32.844 Program python3 found: YES (/usr/bin/python3) 00:01:32.844 Program cat found: YES (/usr/bin/cat) 00:01:32.844 Compiler for C supports arguments -march=native: YES 00:01:32.844 Checking for size of "void *" : 8 00:01:32.844 Checking for size of "void *" : 8 (cached) 00:01:32.844 Library m found: YES 00:01:32.844 Library numa found: YES 00:01:32.844 Has header "numaif.h" : YES 00:01:32.844 Library fdt found: NO 00:01:32.844 Library execinfo found: NO 00:01:32.844 Has header "execinfo.h" : YES 00:01:32.844 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:32.844 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:32.844 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:32.844 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:32.844 Run-time dependency openssl found: YES 3.0.9 00:01:32.844 Run-time dependency libpcap found: YES 1.10.4 00:01:32.844 Has header "pcap.h" with dependency libpcap: YES 00:01:32.844 Compiler for C supports arguments -Wcast-qual: YES 00:01:32.844 Compiler for C supports arguments -Wdeprecated: YES 00:01:32.844 Compiler for C supports arguments -Wformat: YES 00:01:32.844 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:32.844 Compiler for C supports arguments -Wformat-security: YES 00:01:32.844 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:32.844 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:32.844 Compiler for C supports arguments -Wnested-externs: YES 00:01:32.844 Compiler for C supports arguments -Wold-style-definition: YES 00:01:32.844 Compiler for C supports arguments -Wpointer-arith: YES 00:01:32.844 Compiler for C supports arguments -Wsign-compare: YES 00:01:32.844 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:32.844 Compiler for C supports arguments -Wundef: YES 00:01:32.844 Compiler for C supports arguments -Wwrite-strings: YES 00:01:32.844 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:32.844 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:32.844 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:32.844 Program objdump found: YES (/usr/bin/objdump) 00:01:32.844 
Compiler for C supports arguments -mavx512f: YES 00:01:32.844 Checking if "AVX512 checking" compiles: YES 00:01:32.844 Fetching value of define "__SSE4_2__" : 1 00:01:32.844 Fetching value of define "__AES__" : 1 00:01:32.844 Fetching value of define "__AVX__" : 1 00:01:32.844 Fetching value of define "__AVX2__" : 1 00:01:32.844 Fetching value of define "__AVX512BW__" : 1 00:01:32.844 Fetching value of define "__AVX512CD__" : 1 00:01:32.844 Fetching value of define "__AVX512DQ__" : 1 00:01:32.844 Fetching value of define "__AVX512F__" : 1 00:01:32.844 Fetching value of define "__AVX512VL__" : 1 00:01:32.844 Fetching value of define "__PCLMUL__" : 1 00:01:32.844 Fetching value of define "__RDRND__" : 1 00:01:32.844 Fetching value of define "__RDSEED__" : 1 00:01:32.844 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:32.844 Fetching value of define "__znver1__" : (undefined) 00:01:32.844 Fetching value of define "__znver2__" : (undefined) 00:01:32.844 Fetching value of define "__znver3__" : (undefined) 00:01:32.844 Fetching value of define "__znver4__" : (undefined) 00:01:32.844 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:32.844 Message: lib/log: Defining dependency "log" 00:01:32.844 Message: lib/kvargs: Defining dependency "kvargs" 00:01:32.844 Message: lib/telemetry: Defining dependency "telemetry" 00:01:32.844 Checking for function "getentropy" : NO 00:01:32.844 Message: lib/eal: Defining dependency "eal" 00:01:32.844 Message: lib/ring: Defining dependency "ring" 00:01:32.844 Message: lib/rcu: Defining dependency "rcu" 00:01:32.844 Message: lib/mempool: Defining dependency "mempool" 00:01:32.844 Message: lib/mbuf: Defining dependency "mbuf" 00:01:32.844 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:32.844 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:32.844 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:32.844 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:32.844 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:32.844 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:32.844 Compiler for C supports arguments -mpclmul: YES 00:01:32.844 Compiler for C supports arguments -maes: YES 00:01:32.844 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:32.844 Compiler for C supports arguments -mavx512bw: YES 00:01:32.844 Compiler for C supports arguments -mavx512dq: YES 00:01:32.844 Compiler for C supports arguments -mavx512vl: YES 00:01:32.844 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:32.844 Compiler for C supports arguments -mavx2: YES 00:01:32.844 Compiler for C supports arguments -mavx: YES 00:01:32.844 Message: lib/net: Defining dependency "net" 00:01:32.844 Message: lib/meter: Defining dependency "meter" 00:01:32.844 Message: lib/ethdev: Defining dependency "ethdev" 00:01:32.844 Message: lib/pci: Defining dependency "pci" 00:01:32.844 Message: lib/cmdline: Defining dependency "cmdline" 00:01:32.844 Message: lib/hash: Defining dependency "hash" 00:01:32.844 Message: lib/timer: Defining dependency "timer" 00:01:32.844 Message: lib/compressdev: Defining dependency "compressdev" 00:01:32.844 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:32.844 Message: lib/dmadev: Defining dependency "dmadev" 00:01:32.844 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:32.844 Message: lib/power: Defining dependency "power" 00:01:32.844 Message: lib/reorder: Defining dependency "reorder" 00:01:32.844 Message: lib/security: Defining dependency 
"security" 00:01:32.844 Has header "linux/userfaultfd.h" : YES 00:01:32.844 Has header "linux/vduse.h" : YES 00:01:32.844 Message: lib/vhost: Defining dependency "vhost" 00:01:32.844 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:32.844 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:32.844 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:32.844 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:32.844 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:32.844 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:32.844 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:32.844 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:32.844 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:32.844 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:32.844 Program doxygen found: YES (/usr/bin/doxygen) 00:01:32.844 Configuring doxy-api-html.conf using configuration 00:01:32.844 Configuring doxy-api-man.conf using configuration 00:01:32.844 Program mandb found: YES (/usr/bin/mandb) 00:01:32.844 Program sphinx-build found: NO 00:01:32.844 Configuring rte_build_config.h using configuration 00:01:32.844 Message: 00:01:32.844 ================= 00:01:32.844 Applications Enabled 00:01:32.844 ================= 00:01:32.844 00:01:32.844 apps: 00:01:32.844 00:01:32.844 00:01:32.844 Message: 00:01:32.844 ================= 00:01:32.844 Libraries Enabled 00:01:32.844 ================= 00:01:32.844 00:01:32.844 libs: 00:01:32.844 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:32.844 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:32.844 cryptodev, dmadev, power, reorder, security, vhost, 00:01:32.844 00:01:32.844 Message: 00:01:32.844 =============== 00:01:32.844 Drivers Enabled 00:01:32.844 =============== 00:01:32.844 00:01:32.844 common: 00:01:32.844 00:01:32.844 bus: 00:01:32.844 pci, vdev, 00:01:32.844 mempool: 00:01:32.844 ring, 00:01:32.844 dma: 00:01:32.844 00:01:32.844 net: 00:01:32.844 00:01:32.844 crypto: 00:01:32.844 00:01:32.844 compress: 00:01:32.844 00:01:32.844 vdpa: 00:01:32.844 00:01:32.844 00:01:32.844 Message: 00:01:32.844 ================= 00:01:32.844 Content Skipped 00:01:32.844 ================= 00:01:32.844 00:01:32.844 apps: 00:01:32.844 dumpcap: explicitly disabled via build config 00:01:32.844 graph: explicitly disabled via build config 00:01:32.844 pdump: explicitly disabled via build config 00:01:32.844 proc-info: explicitly disabled via build config 00:01:32.844 test-acl: explicitly disabled via build config 00:01:32.844 test-bbdev: explicitly disabled via build config 00:01:32.844 test-cmdline: explicitly disabled via build config 00:01:32.844 test-compress-perf: explicitly disabled via build config 00:01:32.844 test-crypto-perf: explicitly disabled via build config 00:01:32.844 test-dma-perf: explicitly disabled via build config 00:01:32.844 test-eventdev: explicitly disabled via build config 00:01:32.844 test-fib: explicitly disabled via build config 00:01:32.844 test-flow-perf: explicitly disabled via build config 00:01:32.844 test-gpudev: explicitly disabled via build config 00:01:32.844 test-mldev: explicitly disabled via build config 00:01:32.844 test-pipeline: explicitly disabled via build config 00:01:32.844 test-pmd: explicitly disabled via build config 00:01:32.844 test-regex: explicitly disabled via 
build config 00:01:32.844 test-sad: explicitly disabled via build config 00:01:32.844 test-security-perf: explicitly disabled via build config 00:01:32.844 00:01:32.844 libs: 00:01:32.844 metrics: explicitly disabled via build config 00:01:32.844 acl: explicitly disabled via build config 00:01:32.844 bbdev: explicitly disabled via build config 00:01:32.844 bitratestats: explicitly disabled via build config 00:01:32.844 bpf: explicitly disabled via build config 00:01:32.844 cfgfile: explicitly disabled via build config 00:01:32.844 distributor: explicitly disabled via build config 00:01:32.844 efd: explicitly disabled via build config 00:01:32.844 eventdev: explicitly disabled via build config 00:01:32.844 dispatcher: explicitly disabled via build config 00:01:32.844 gpudev: explicitly disabled via build config 00:01:32.844 gro: explicitly disabled via build config 00:01:32.844 gso: explicitly disabled via build config 00:01:32.844 ip_frag: explicitly disabled via build config 00:01:32.844 jobstats: explicitly disabled via build config 00:01:32.844 latencystats: explicitly disabled via build config 00:01:32.845 lpm: explicitly disabled via build config 00:01:32.845 member: explicitly disabled via build config 00:01:32.845 pcapng: explicitly disabled via build config 00:01:32.845 rawdev: explicitly disabled via build config 00:01:32.845 regexdev: explicitly disabled via build config 00:01:32.845 mldev: explicitly disabled via build config 00:01:32.845 rib: explicitly disabled via build config 00:01:32.845 sched: explicitly disabled via build config 00:01:32.845 stack: explicitly disabled via build config 00:01:32.845 ipsec: explicitly disabled via build config 00:01:32.845 pdcp: explicitly disabled via build config 00:01:32.845 fib: explicitly disabled via build config 00:01:32.845 port: explicitly disabled via build config 00:01:32.845 pdump: explicitly disabled via build config 00:01:32.845 table: explicitly disabled via build config 00:01:32.845 pipeline: explicitly disabled via build config 00:01:32.845 graph: explicitly disabled via build config 00:01:32.845 node: explicitly disabled via build config 00:01:32.845 00:01:32.845 drivers: 00:01:32.845 common/cpt: not in enabled drivers build config 00:01:32.845 common/dpaax: not in enabled drivers build config 00:01:32.845 common/iavf: not in enabled drivers build config 00:01:32.845 common/idpf: not in enabled drivers build config 00:01:32.845 common/mvep: not in enabled drivers build config 00:01:32.845 common/octeontx: not in enabled drivers build config 00:01:32.845 bus/auxiliary: not in enabled drivers build config 00:01:32.845 bus/cdx: not in enabled drivers build config 00:01:32.845 bus/dpaa: not in enabled drivers build config 00:01:32.845 bus/fslmc: not in enabled drivers build config 00:01:32.845 bus/ifpga: not in enabled drivers build config 00:01:32.845 bus/platform: not in enabled drivers build config 00:01:32.845 bus/vmbus: not in enabled drivers build config 00:01:32.845 common/cnxk: not in enabled drivers build config 00:01:32.845 common/mlx5: not in enabled drivers build config 00:01:32.845 common/nfp: not in enabled drivers build config 00:01:32.845 common/qat: not in enabled drivers build config 00:01:32.845 common/sfc_efx: not in enabled drivers build config 00:01:32.845 mempool/bucket: not in enabled drivers build config 00:01:32.845 mempool/cnxk: not in enabled drivers build config 00:01:32.845 mempool/dpaa: not in enabled drivers build config 00:01:32.845 mempool/dpaa2: not in enabled drivers build config 00:01:32.845 
mempool/octeontx: not in enabled drivers build config 00:01:32.845 mempool/stack: not in enabled drivers build config 00:01:32.845 dma/cnxk: not in enabled drivers build config 00:01:32.845 dma/dpaa: not in enabled drivers build config 00:01:32.845 dma/dpaa2: not in enabled drivers build config 00:01:32.845 dma/hisilicon: not in enabled drivers build config 00:01:32.845 dma/idxd: not in enabled drivers build config 00:01:32.845 dma/ioat: not in enabled drivers build config 00:01:32.845 dma/skeleton: not in enabled drivers build config 00:01:32.845 net/af_packet: not in enabled drivers build config 00:01:32.845 net/af_xdp: not in enabled drivers build config 00:01:32.845 net/ark: not in enabled drivers build config 00:01:32.845 net/atlantic: not in enabled drivers build config 00:01:32.845 net/avp: not in enabled drivers build config 00:01:32.845 net/axgbe: not in enabled drivers build config 00:01:32.845 net/bnx2x: not in enabled drivers build config 00:01:32.845 net/bnxt: not in enabled drivers build config 00:01:32.845 net/bonding: not in enabled drivers build config 00:01:32.845 net/cnxk: not in enabled drivers build config 00:01:32.845 net/cpfl: not in enabled drivers build config 00:01:32.845 net/cxgbe: not in enabled drivers build config 00:01:32.845 net/dpaa: not in enabled drivers build config 00:01:32.845 net/dpaa2: not in enabled drivers build config 00:01:32.845 net/e1000: not in enabled drivers build config 00:01:32.845 net/ena: not in enabled drivers build config 00:01:32.845 net/enetc: not in enabled drivers build config 00:01:32.845 net/enetfec: not in enabled drivers build config 00:01:32.845 net/enic: not in enabled drivers build config 00:01:32.845 net/failsafe: not in enabled drivers build config 00:01:32.845 net/fm10k: not in enabled drivers build config 00:01:32.845 net/gve: not in enabled drivers build config 00:01:32.845 net/hinic: not in enabled drivers build config 00:01:32.845 net/hns3: not in enabled drivers build config 00:01:32.845 net/i40e: not in enabled drivers build config 00:01:32.845 net/iavf: not in enabled drivers build config 00:01:32.845 net/ice: not in enabled drivers build config 00:01:32.845 net/idpf: not in enabled drivers build config 00:01:32.845 net/igc: not in enabled drivers build config 00:01:32.845 net/ionic: not in enabled drivers build config 00:01:32.845 net/ipn3ke: not in enabled drivers build config 00:01:32.845 net/ixgbe: not in enabled drivers build config 00:01:32.845 net/mana: not in enabled drivers build config 00:01:32.845 net/memif: not in enabled drivers build config 00:01:32.845 net/mlx4: not in enabled drivers build config 00:01:32.845 net/mlx5: not in enabled drivers build config 00:01:32.845 net/mvneta: not in enabled drivers build config 00:01:32.845 net/mvpp2: not in enabled drivers build config 00:01:32.845 net/netvsc: not in enabled drivers build config 00:01:32.845 net/nfb: not in enabled drivers build config 00:01:32.845 net/nfp: not in enabled drivers build config 00:01:32.845 net/ngbe: not in enabled drivers build config 00:01:32.845 net/null: not in enabled drivers build config 00:01:32.845 net/octeontx: not in enabled drivers build config 00:01:32.845 net/octeon_ep: not in enabled drivers build config 00:01:32.845 net/pcap: not in enabled drivers build config 00:01:32.845 net/pfe: not in enabled drivers build config 00:01:32.845 net/qede: not in enabled drivers build config 00:01:32.845 net/ring: not in enabled drivers build config 00:01:32.845 net/sfc: not in enabled drivers build config 00:01:32.845 net/softnic: 
not in enabled drivers build config 00:01:32.845 net/tap: not in enabled drivers build config 00:01:32.845 net/thunderx: not in enabled drivers build config 00:01:32.845 net/txgbe: not in enabled drivers build config 00:01:32.845 net/vdev_netvsc: not in enabled drivers build config 00:01:32.845 net/vhost: not in enabled drivers build config 00:01:32.845 net/virtio: not in enabled drivers build config 00:01:32.845 net/vmxnet3: not in enabled drivers build config 00:01:32.845 raw/*: missing internal dependency, "rawdev" 00:01:32.845 crypto/armv8: not in enabled drivers build config 00:01:32.845 crypto/bcmfs: not in enabled drivers build config 00:01:32.845 crypto/caam_jr: not in enabled drivers build config 00:01:32.845 crypto/ccp: not in enabled drivers build config 00:01:32.845 crypto/cnxk: not in enabled drivers build config 00:01:32.845 crypto/dpaa_sec: not in enabled drivers build config 00:01:32.845 crypto/dpaa2_sec: not in enabled drivers build config 00:01:32.845 crypto/ipsec_mb: not in enabled drivers build config 00:01:32.845 crypto/mlx5: not in enabled drivers build config 00:01:32.845 crypto/mvsam: not in enabled drivers build config 00:01:32.845 crypto/nitrox: not in enabled drivers build config 00:01:32.845 crypto/null: not in enabled drivers build config 00:01:32.845 crypto/octeontx: not in enabled drivers build config 00:01:32.845 crypto/openssl: not in enabled drivers build config 00:01:32.845 crypto/scheduler: not in enabled drivers build config 00:01:32.845 crypto/uadk: not in enabled drivers build config 00:01:32.845 crypto/virtio: not in enabled drivers build config 00:01:32.845 compress/isal: not in enabled drivers build config 00:01:32.845 compress/mlx5: not in enabled drivers build config 00:01:32.845 compress/octeontx: not in enabled drivers build config 00:01:32.845 compress/zlib: not in enabled drivers build config 00:01:32.845 regex/*: missing internal dependency, "regexdev" 00:01:32.845 ml/*: missing internal dependency, "mldev" 00:01:32.845 vdpa/ifc: not in enabled drivers build config 00:01:32.845 vdpa/mlx5: not in enabled drivers build config 00:01:32.845 vdpa/nfp: not in enabled drivers build config 00:01:32.845 vdpa/sfc: not in enabled drivers build config 00:01:32.845 event/*: missing internal dependency, "eventdev" 00:01:32.845 baseband/*: missing internal dependency, "bbdev" 00:01:32.845 gpu/*: missing internal dependency, "gpudev" 00:01:32.845 00:01:32.845 00:01:32.845 Build targets in project: 85 00:01:32.845 00:01:32.845 DPDK 23.11.0 00:01:32.845 00:01:32.845 User defined options 00:01:32.845 buildtype : debug 00:01:32.845 default_library : static 00:01:32.845 libdir : lib 00:01:32.845 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:32.845 c_args : -fPIC -Werror 00:01:32.845 c_link_args : 00:01:32.845 cpu_instruction_set: native 00:01:32.845 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:32.845 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:32.845 enable_docs : false 00:01:32.845 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:32.845 enable_kmods : false 00:01:32.845 tests : false 00:01:32.845 
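The "User defined options" block above maps onto a meson setup invocation roughly like the sketch below (assembled from the listed options; the real command line is generated by SPDK's dpdk build integration and also carries the full disable_apps/disable_libs lists printed above):

  meson setup build-tmp . \
      --buildtype=debug --default-library=static --libdir=lib \
      --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
      -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring -Denable_docs=false -Dtests=false
  ninja -C build-tmp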
00:01:32.845 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:33.113 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:33.113 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:33.113 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:33.113 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:33.113 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:33.113 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:33.113 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:33.113 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:33.113 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:33.113 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:33.113 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:33.113 [11/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:33.113 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:33.113 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:33.113 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:33.113 [15/265] Linking static target lib/librte_kvargs.a 00:01:33.113 [16/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:33.113 [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:33.113 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:33.113 [19/265] Linking static target lib/librte_log.a 00:01:33.113 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:33.113 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:33.113 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:33.113 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:33.113 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:33.113 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:33.113 [26/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:33.113 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:33.113 [28/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:33.113 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:33.113 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:33.113 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:33.113 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:33.113 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:33.113 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:33.113 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:33.113 [36/265] Linking static target lib/librte_pci.a 00:01:33.113 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:33.113 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:33.113 [39/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:33.113 [40/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:33.370 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:33.370 [42/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.370 [43/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.630 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:33.630 [45/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:33.630 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:33.630 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:33.630 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:33.630 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:33.630 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:33.630 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:33.630 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:33.630 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:33.630 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:33.630 [55/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:33.630 [56/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:33.630 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:33.630 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:33.630 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:33.630 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:33.630 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:33.630 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:33.630 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:33.630 [64/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:33.630 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:33.630 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:33.630 [67/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:33.630 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:33.630 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:33.630 [70/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:33.630 [71/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:33.630 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:33.630 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:33.630 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:33.630 [75/265] Linking static target lib/librte_telemetry.a 00:01:33.630 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:33.630 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:33.630 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 
00:01:33.630 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:33.630 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:33.630 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:33.630 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:33.630 [83/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:33.630 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:33.630 [85/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:33.630 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:33.630 [87/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:33.630 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:33.630 [89/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:33.630 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:33.630 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:33.630 [92/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:33.630 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:33.630 [94/265] Linking static target lib/librte_meter.a 00:01:33.630 [95/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:33.630 [96/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:33.630 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:33.630 [98/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:33.630 [99/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:33.630 [100/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:33.630 [101/265] Linking static target lib/librte_ring.a 00:01:33.630 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:33.630 [103/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:33.630 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:33.630 [105/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:33.630 [106/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:33.630 [107/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:33.630 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:33.630 [109/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:33.630 [110/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:33.630 [111/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:33.630 [112/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:33.630 [113/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:33.630 [114/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:33.630 [115/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:33.630 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:33.630 [117/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:33.630 [118/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:33.630 [119/265] 
Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:33.630 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:33.630 [121/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:33.630 [122/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.630 [123/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:33.630 [124/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:33.630 [125/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:33.630 [126/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:33.630 [127/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:33.630 [128/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:33.630 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:33.630 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:33.630 [131/265] Linking static target lib/librte_timer.a 00:01:33.630 [132/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:33.630 [133/265] Linking static target lib/librte_cmdline.a 00:01:33.630 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:33.630 [135/265] Linking static target lib/librte_rcu.a 00:01:33.630 [136/265] Linking static target lib/librte_eal.a 00:01:33.630 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:33.630 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:33.630 [139/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:33.630 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:33.630 [141/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:33.630 [142/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:33.630 [143/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:33.888 [144/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:33.888 [145/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:33.888 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:33.888 [147/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:33.888 [148/265] Linking static target lib/librte_net.a 00:01:33.888 [149/265] Linking static target lib/librte_reorder.a 00:01:33.888 [150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:33.888 [151/265] Linking target lib/librte_log.so.24.0 00:01:33.888 [152/265] Linking static target lib/librte_compressdev.a 00:01:33.888 [153/265] Linking static target lib/librte_mempool.a 00:01:33.888 [154/265] Linking static target lib/librte_dmadev.a 00:01:33.888 [155/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:33.888 [156/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:33.888 [157/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:33.889 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:33.889 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:33.889 [160/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:33.889 [161/265] Linking static target lib/librte_mbuf.a 00:01:33.889 
[162/265] Linking static target lib/librte_power.a 00:01:33.889 [163/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:33.889 [164/265] Linking static target lib/librte_security.a 00:01:33.889 [165/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:33.889 [166/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:33.889 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:33.889 [168/265] Linking static target lib/librte_hash.a 00:01:33.889 [169/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:33.889 [170/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.889 [171/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:33.889 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:33.889 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:33.889 [174/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:33.889 [175/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:33.889 [176/265] Linking target lib/librte_kvargs.so.24.0 00:01:33.889 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:33.889 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:33.889 [179/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.889 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:33.889 [181/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:33.889 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:33.889 [183/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:34.147 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:34.147 [185/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:34.147 [186/265] Linking static target lib/librte_cryptodev.a 00:01:34.147 [187/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:34.147 [188/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:34.147 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:34.147 [190/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.147 [191/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.147 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:34.147 [193/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:34.147 [194/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:34.147 [195/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.147 [196/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:34.147 [197/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.147 [198/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.147 [199/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:34.147 [200/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.147 [201/265] Compiling 
C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.147 [202/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.147 [203/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:34.147 [204/265] Linking static target drivers/librte_bus_vdev.a 00:01:34.147 [205/265] Linking target lib/librte_telemetry.so.24.0 00:01:34.147 [206/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.405 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.405 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.405 [209/265] Linking static target drivers/librte_mempool_ring.a 00:01:34.405 [210/265] Linking static target drivers/librte_bus_pci.a 00:01:34.405 [211/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:34.405 [212/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.405 [213/265] Linking static target lib/librte_ethdev.a 00:01:34.405 [214/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:34.405 [215/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:34.405 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.405 [217/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.662 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.662 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.662 [220/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.662 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.920 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.920 [223/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:34.920 [224/265] Linking static target lib/librte_vhost.a 00:01:34.920 [225/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.177 [226/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.554 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.121 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.683 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.965 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.965 [231/265] Linking target lib/librte_eal.so.24.0 00:01:46.965 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:46.965 [233/265] Linking target lib/librte_pci.so.24.0 00:01:46.965 [234/265] Linking target lib/librte_ring.so.24.0 00:01:46.965 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:46.965 [236/265] Linking target lib/librte_timer.so.24.0 00:01:46.965 [237/265] Linking target lib/librte_meter.so.24.0 00:01:46.965 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:46.965 
[239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:46.965 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:46.965 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:46.965 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:46.965 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:46.965 [244/265] Linking target lib/librte_mempool.so.24.0 00:01:46.965 [245/265] Linking target lib/librte_rcu.so.24.0 00:01:46.965 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:46.965 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:46.965 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:47.224 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:47.224 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:47.224 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:47.224 [252/265] Linking target lib/librte_compressdev.so.24.0 00:01:47.224 [253/265] Linking target lib/librte_reorder.so.24.0 00:01:47.224 [254/265] Linking target lib/librte_net.so.24.0 00:01:47.224 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:47.482 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:47.482 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:47.482 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:47.483 [259/265] Linking target lib/librte_hash.so.24.0 00:01:47.483 [260/265] Linking target lib/librte_cmdline.so.24.0 00:01:47.483 [261/265] Linking target lib/librte_security.so.24.0 00:01:47.483 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:47.741 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:47.741 [264/265] Linking target lib/librte_power.so.24.0 00:01:47.741 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:47.741 INFO: autodetecting backend as ninja 00:01:47.741 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:48.674 CC lib/log/log_deprecated.o 00:01:48.674 CC lib/log/log.o 00:01:48.674 CC lib/log/log_flags.o 00:01:48.674 CC lib/ut_mock/mock.o 00:01:48.674 CC lib/ut/ut.o 00:01:48.674 LIB libspdk_ut_mock.a 00:01:48.674 LIB libspdk_log.a 00:01:48.674 LIB libspdk_ut.a 00:01:48.933 CC lib/util/bit_array.o 00:01:48.933 CC lib/util/base64.o 00:01:48.933 CC lib/util/cpuset.o 00:01:48.933 CC lib/ioat/ioat.o 00:01:48.933 CC lib/util/crc32.o 00:01:48.933 CC lib/util/crc16.o 00:01:48.933 CC lib/util/crc32c.o 00:01:48.933 CC lib/util/crc32_ieee.o 00:01:48.933 CC lib/util/fd.o 00:01:48.933 CC lib/util/crc64.o 00:01:48.933 CC lib/util/dif.o 00:01:48.933 CC lib/util/file.o 00:01:48.933 CC lib/util/hexlify.o 00:01:48.933 CC lib/util/iov.o 00:01:49.192 CC lib/util/math.o 00:01:49.192 CC lib/util/pipe.o 00:01:49.192 CC lib/dma/dma.o 00:01:49.192 CC lib/util/strerror_tls.o 00:01:49.192 CC lib/util/string.o 00:01:49.192 CC lib/util/uuid.o 00:01:49.192 CC lib/util/fd_group.o 00:01:49.192 CC lib/util/xor.o 00:01:49.192 CC lib/util/zipf.o 00:01:49.192 CXX lib/trace_parser/trace.o 00:01:49.192 CC lib/vfio_user/host/vfio_user_pci.o 00:01:49.192 CC 
lib/vfio_user/host/vfio_user.o 00:01:49.192 LIB libspdk_dma.a 00:01:49.192 LIB libspdk_ioat.a 00:01:49.450 LIB libspdk_vfio_user.a 00:01:49.450 LIB libspdk_util.a 00:01:49.450 LIB libspdk_trace_parser.a 00:01:49.707 CC lib/env_dpdk/env.o 00:01:49.707 CC lib/env_dpdk/memory.o 00:01:49.707 CC lib/env_dpdk/pci.o 00:01:49.707 CC lib/env_dpdk/init.o 00:01:49.707 CC lib/env_dpdk/threads.o 00:01:49.707 CC lib/env_dpdk/pci_ioat.o 00:01:49.707 CC lib/env_dpdk/pci_virtio.o 00:01:49.707 CC lib/env_dpdk/pci_vmd.o 00:01:49.707 CC lib/env_dpdk/pci_idxd.o 00:01:49.707 CC lib/env_dpdk/pci_dpdk.o 00:01:49.707 CC lib/env_dpdk/pci_event.o 00:01:49.707 CC lib/env_dpdk/sigbus_handler.o 00:01:49.707 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:49.707 CC lib/json/json_parse.o 00:01:49.707 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:49.707 CC lib/json/json_util.o 00:01:49.707 CC lib/json/json_write.o 00:01:49.707 CC lib/conf/conf.o 00:01:49.707 CC lib/vmd/vmd.o 00:01:49.707 CC lib/vmd/led.o 00:01:49.707 CC lib/idxd/idxd.o 00:01:49.707 CC lib/rdma/rdma_verbs.o 00:01:49.707 CC lib/idxd/idxd_user.o 00:01:49.707 CC lib/rdma/common.o 00:01:49.965 LIB libspdk_conf.a 00:01:49.965 LIB libspdk_json.a 00:01:49.965 LIB libspdk_rdma.a 00:01:49.965 LIB libspdk_idxd.a 00:01:49.965 LIB libspdk_vmd.a 00:01:50.222 CC lib/jsonrpc/jsonrpc_server.o 00:01:50.222 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:50.222 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:50.222 CC lib/jsonrpc/jsonrpc_client.o 00:01:50.480 LIB libspdk_jsonrpc.a 00:01:50.480 LIB libspdk_env_dpdk.a 00:01:50.738 CC lib/rpc/rpc.o 00:01:50.738 LIB libspdk_rpc.a 00:01:51.303 CC lib/notify/notify.o 00:01:51.303 CC lib/notify/notify_rpc.o 00:01:51.303 CC lib/trace/trace.o 00:01:51.303 CC lib/trace/trace_flags.o 00:01:51.303 CC lib/trace/trace_rpc.o 00:01:51.303 CC lib/keyring/keyring.o 00:01:51.303 CC lib/keyring/keyring_rpc.o 00:01:51.303 LIB libspdk_notify.a 00:01:51.303 LIB libspdk_keyring.a 00:01:51.303 LIB libspdk_trace.a 00:01:51.560 CC lib/thread/thread.o 00:01:51.560 CC lib/thread/iobuf.o 00:01:51.560 CC lib/sock/sock.o 00:01:51.560 CC lib/sock/sock_rpc.o 00:01:51.819 LIB libspdk_sock.a 00:01:52.077 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:52.077 CC lib/nvme/nvme_ctrlr.o 00:01:52.078 CC lib/nvme/nvme_ns_cmd.o 00:01:52.078 CC lib/nvme/nvme_fabric.o 00:01:52.078 CC lib/nvme/nvme_ns.o 00:01:52.078 CC lib/nvme/nvme_pcie_common.o 00:01:52.078 CC lib/nvme/nvme_qpair.o 00:01:52.078 CC lib/nvme/nvme_pcie.o 00:01:52.078 CC lib/nvme/nvme.o 00:01:52.078 CC lib/nvme/nvme_quirks.o 00:01:52.078 CC lib/nvme/nvme_transport.o 00:01:52.078 CC lib/nvme/nvme_discovery.o 00:01:52.078 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:52.078 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:52.078 CC lib/nvme/nvme_tcp.o 00:01:52.078 CC lib/nvme/nvme_opal.o 00:01:52.078 CC lib/nvme/nvme_io_msg.o 00:01:52.078 CC lib/nvme/nvme_poll_group.o 00:01:52.078 CC lib/nvme/nvme_zns.o 00:01:52.078 CC lib/nvme/nvme_stubs.o 00:01:52.078 CC lib/nvme/nvme_auth.o 00:01:52.078 CC lib/nvme/nvme_cuse.o 00:01:52.078 CC lib/nvme/nvme_vfio_user.o 00:01:52.078 CC lib/nvme/nvme_rdma.o 00:01:52.336 LIB libspdk_thread.a 00:01:52.594 CC lib/blob/blobstore.o 00:01:52.594 CC lib/accel/accel.o 00:01:52.595 CC lib/blob/request.o 00:01:52.595 CC lib/blob/zeroes.o 00:01:52.595 CC lib/accel/accel_rpc.o 00:01:52.595 CC lib/blob/blob_bs_dev.o 00:01:52.595 CC lib/accel/accel_sw.o 00:01:52.595 CC lib/vfu_tgt/tgt_endpoint.o 00:01:52.595 CC lib/vfu_tgt/tgt_rpc.o 00:01:52.595 CC lib/init/json_config.o 00:01:52.595 CC lib/init/subsystem.o 00:01:52.595 CC 
lib/init/subsystem_rpc.o 00:01:52.595 CC lib/init/rpc.o 00:01:52.595 CC lib/virtio/virtio.o 00:01:52.595 CC lib/virtio/virtio_vfio_user.o 00:01:52.595 CC lib/virtio/virtio_vhost_user.o 00:01:52.595 CC lib/virtio/virtio_pci.o 00:01:52.853 LIB libspdk_init.a 00:01:52.853 LIB libspdk_vfu_tgt.a 00:01:52.853 LIB libspdk_virtio.a 00:01:53.111 CC lib/event/app.o 00:01:53.111 CC lib/event/reactor.o 00:01:53.111 CC lib/event/log_rpc.o 00:01:53.111 CC lib/event/app_rpc.o 00:01:53.111 CC lib/event/scheduler_static.o 00:01:53.370 LIB libspdk_accel.a 00:01:53.370 LIB libspdk_event.a 00:01:53.370 LIB libspdk_nvme.a 00:01:53.627 CC lib/bdev/bdev.o 00:01:53.627 CC lib/bdev/bdev_rpc.o 00:01:53.627 CC lib/bdev/bdev_zone.o 00:01:53.627 CC lib/bdev/part.o 00:01:53.627 CC lib/bdev/scsi_nvme.o 00:01:54.194 LIB libspdk_blob.a 00:01:54.761 CC lib/lvol/lvol.o 00:01:54.761 CC lib/blobfs/blobfs.o 00:01:54.761 CC lib/blobfs/tree.o 00:01:55.019 LIB libspdk_lvol.a 00:01:55.277 LIB libspdk_blobfs.a 00:01:55.277 LIB libspdk_bdev.a 00:01:55.536 CC lib/scsi/dev.o 00:01:55.536 CC lib/scsi/lun.o 00:01:55.536 CC lib/scsi/port.o 00:01:55.536 CC lib/scsi/scsi.o 00:01:55.536 CC lib/scsi/scsi_bdev.o 00:01:55.536 CC lib/scsi/scsi_pr.o 00:01:55.536 CC lib/scsi/scsi_rpc.o 00:01:55.536 CC lib/scsi/task.o 00:01:55.536 CC lib/ftl/ftl_core.o 00:01:55.536 CC lib/ftl/ftl_init.o 00:01:55.536 CC lib/ftl/ftl_layout.o 00:01:55.536 CC lib/ftl/ftl_debug.o 00:01:55.536 CC lib/ftl/ftl_io.o 00:01:55.536 CC lib/ftl/ftl_sb.o 00:01:55.536 CC lib/ftl/ftl_l2p.o 00:01:55.536 CC lib/ftl/ftl_band.o 00:01:55.536 CC lib/ftl/ftl_l2p_flat.o 00:01:55.536 CC lib/ftl/ftl_nv_cache.o 00:01:55.536 CC lib/ftl/ftl_writer.o 00:01:55.536 CC lib/ftl/ftl_band_ops.o 00:01:55.536 CC lib/ftl/ftl_l2p_cache.o 00:01:55.536 CC lib/ftl/ftl_rq.o 00:01:55.536 CC lib/ftl/ftl_reloc.o 00:01:55.536 CC lib/ftl/ftl_p2l.o 00:01:55.536 CC lib/nvmf/ctrlr.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt.o 00:01:55.536 CC lib/nvmf/ctrlr_discovery.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:55.536 CC lib/nvmf/ctrlr_bdev.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:55.536 CC lib/nvmf/nvmf_rpc.o 00:01:55.536 CC lib/nvmf/subsystem.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:55.536 CC lib/nvmf/nvmf.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:55.536 CC lib/nvmf/transport.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:55.536 CC lib/nvmf/tcp.o 00:01:55.536 CC lib/nvmf/stubs.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:55.536 CC lib/nvmf/mdns_server.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:55.536 CC lib/nvmf/vfio_user.o 00:01:55.536 CC lib/nvmf/rdma.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:55.536 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:55.536 CC lib/nvmf/auth.o 00:01:55.536 CC lib/ftl/utils/ftl_conf.o 00:01:55.536 CC lib/ftl/utils/ftl_md.o 00:01:55.536 CC lib/ftl/utils/ftl_bitmap.o 00:01:55.536 CC lib/ftl/utils/ftl_property.o 00:01:55.536 CC lib/ftl/utils/ftl_mempool.o 00:01:55.536 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:55.536 CC lib/ublk/ublk.o 00:01:55.536 CC lib/ublk/ublk_rpc.o 00:01:55.536 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:55.536 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:55.536 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:55.536 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:55.536 CC lib/nbd/nbd.o 00:01:55.536 CC lib/nbd/nbd_rpc.o 00:01:55.536 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:01:55.536 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:55.536 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:55.536 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:55.536 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:55.536 CC lib/ftl/base/ftl_base_dev.o 00:01:55.536 CC lib/ftl/base/ftl_base_bdev.o 00:01:55.536 CC lib/ftl/ftl_trace.o 00:01:56.156 LIB libspdk_scsi.a 00:01:56.156 LIB libspdk_nbd.a 00:01:56.156 LIB libspdk_ublk.a 00:01:56.156 LIB libspdk_ftl.a 00:01:56.156 CC lib/vhost/vhost.o 00:01:56.156 CC lib/vhost/vhost_rpc.o 00:01:56.156 CC lib/vhost/vhost_blk.o 00:01:56.156 CC lib/vhost/vhost_scsi.o 00:01:56.156 CC lib/vhost/rte_vhost_user.o 00:01:56.156 CC lib/iscsi/init_grp.o 00:01:56.156 CC lib/iscsi/conn.o 00:01:56.156 CC lib/iscsi/md5.o 00:01:56.156 CC lib/iscsi/iscsi.o 00:01:56.156 CC lib/iscsi/param.o 00:01:56.156 CC lib/iscsi/portal_grp.o 00:01:56.156 CC lib/iscsi/tgt_node.o 00:01:56.156 CC lib/iscsi/iscsi_subsystem.o 00:01:56.156 CC lib/iscsi/iscsi_rpc.o 00:01:56.156 CC lib/iscsi/task.o 00:01:56.720 LIB libspdk_nvmf.a 00:01:56.979 LIB libspdk_vhost.a 00:01:56.979 LIB libspdk_iscsi.a 00:01:57.546 CC module/env_dpdk/env_dpdk_rpc.o 00:01:57.546 CC module/vfu_device/vfu_virtio.o 00:01:57.546 CC module/vfu_device/vfu_virtio_blk.o 00:01:57.546 CC module/vfu_device/vfu_virtio_scsi.o 00:01:57.546 CC module/vfu_device/vfu_virtio_rpc.o 00:01:57.546 LIB libspdk_env_dpdk_rpc.a 00:01:57.546 CC module/accel/dsa/accel_dsa_rpc.o 00:01:57.546 CC module/accel/dsa/accel_dsa.o 00:01:57.546 CC module/accel/error/accel_error.o 00:01:57.546 CC module/accel/error/accel_error_rpc.o 00:01:57.546 CC module/blob/bdev/blob_bdev.o 00:01:57.546 CC module/accel/ioat/accel_ioat_rpc.o 00:01:57.546 CC module/accel/ioat/accel_ioat.o 00:01:57.546 CC module/scheduler/gscheduler/gscheduler.o 00:01:57.546 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:57.546 CC module/keyring/file/keyring.o 00:01:57.546 CC module/accel/iaa/accel_iaa.o 00:01:57.546 CC module/keyring/file/keyring_rpc.o 00:01:57.546 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:57.546 CC module/accel/iaa/accel_iaa_rpc.o 00:01:57.546 CC module/sock/posix/posix.o 00:01:57.803 LIB libspdk_keyring_file.a 00:01:57.803 LIB libspdk_scheduler_gscheduler.a 00:01:57.803 LIB libspdk_accel_error.a 00:01:57.803 LIB libspdk_scheduler_dpdk_governor.a 00:01:57.803 LIB libspdk_accel_ioat.a 00:01:57.803 LIB libspdk_accel_iaa.a 00:01:57.803 LIB libspdk_scheduler_dynamic.a 00:01:57.803 LIB libspdk_accel_dsa.a 00:01:57.803 LIB libspdk_blob_bdev.a 00:01:57.803 LIB libspdk_vfu_device.a 00:01:58.060 LIB libspdk_sock_posix.a 00:01:58.317 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:58.317 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:58.317 CC module/blobfs/bdev/blobfs_bdev.o 00:01:58.317 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:58.317 CC module/bdev/split/vbdev_split.o 00:01:58.317 CC module/bdev/split/vbdev_split_rpc.o 00:01:58.317 CC module/bdev/raid/bdev_raid_rpc.o 00:01:58.317 CC module/bdev/delay/vbdev_delay.o 00:01:58.317 CC module/bdev/raid/bdev_raid.o 00:01:58.317 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:58.317 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:58.317 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:58.317 CC module/bdev/aio/bdev_aio.o 00:01:58.317 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:58.317 CC module/bdev/gpt/gpt.o 00:01:58.317 CC module/bdev/gpt/vbdev_gpt.o 00:01:58.317 CC module/bdev/raid/raid0.o 00:01:58.317 CC module/bdev/raid/bdev_raid_sb.o 00:01:58.317 CC module/bdev/raid/raid1.o 00:01:58.317 CC 
module/bdev/lvol/vbdev_lvol.o 00:01:58.317 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:58.317 CC module/bdev/aio/bdev_aio_rpc.o 00:01:58.317 CC module/bdev/raid/concat.o 00:01:58.317 CC module/bdev/passthru/vbdev_passthru.o 00:01:58.317 CC module/bdev/ftl/bdev_ftl.o 00:01:58.317 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:58.317 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:58.317 CC module/bdev/malloc/bdev_malloc.o 00:01:58.317 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:58.317 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:58.317 CC module/bdev/iscsi/bdev_iscsi.o 00:01:58.317 CC module/bdev/null/bdev_null.o 00:01:58.317 CC module/bdev/nvme/bdev_nvme.o 00:01:58.317 CC module/bdev/error/vbdev_error.o 00:01:58.317 CC module/bdev/null/bdev_null_rpc.o 00:01:58.317 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:58.317 CC module/bdev/error/vbdev_error_rpc.o 00:01:58.317 CC module/bdev/nvme/nvme_rpc.o 00:01:58.317 CC module/bdev/nvme/bdev_mdns_client.o 00:01:58.317 CC module/bdev/nvme/vbdev_opal.o 00:01:58.317 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:58.317 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:58.317 LIB libspdk_blobfs_bdev.a 00:01:58.317 LIB libspdk_bdev_split.a 00:01:58.317 LIB libspdk_bdev_gpt.a 00:01:58.317 LIB libspdk_bdev_null.a 00:01:58.317 LIB libspdk_bdev_ftl.a 00:01:58.575 LIB libspdk_bdev_passthru.a 00:01:58.575 LIB libspdk_bdev_error.a 00:01:58.575 LIB libspdk_bdev_zone_block.a 00:01:58.575 LIB libspdk_bdev_aio.a 00:01:58.575 LIB libspdk_bdev_iscsi.a 00:01:58.575 LIB libspdk_bdev_delay.a 00:01:58.575 LIB libspdk_bdev_malloc.a 00:01:58.575 LIB libspdk_bdev_lvol.a 00:01:58.575 LIB libspdk_bdev_virtio.a 00:01:58.833 LIB libspdk_bdev_raid.a 00:01:59.400 LIB libspdk_bdev_nvme.a 00:01:59.967 CC module/event/subsystems/scheduler/scheduler.o 00:01:59.967 CC module/event/subsystems/sock/sock.o 00:01:59.967 CC module/event/subsystems/iobuf/iobuf.o 00:01:59.967 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:59.967 CC module/event/subsystems/vmd/vmd.o 00:01:59.967 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:59.967 CC module/event/subsystems/keyring/keyring.o 00:01:59.967 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:59.967 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:00.226 LIB libspdk_event_sock.a 00:02:00.226 LIB libspdk_event_scheduler.a 00:02:00.226 LIB libspdk_event_keyring.a 00:02:00.226 LIB libspdk_event_iobuf.a 00:02:00.226 LIB libspdk_event_vmd.a 00:02:00.226 LIB libspdk_event_vhost_blk.a 00:02:00.226 LIB libspdk_event_vfu_tgt.a 00:02:00.484 CC module/event/subsystems/accel/accel.o 00:02:00.484 LIB libspdk_event_accel.a 00:02:01.050 CC module/event/subsystems/bdev/bdev.o 00:02:01.050 LIB libspdk_event_bdev.a 00:02:01.309 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:01.309 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:01.309 CC module/event/subsystems/scsi/scsi.o 00:02:01.309 CC module/event/subsystems/nbd/nbd.o 00:02:01.309 CC module/event/subsystems/ublk/ublk.o 00:02:01.567 LIB libspdk_event_nbd.a 00:02:01.567 LIB libspdk_event_ublk.a 00:02:01.567 LIB libspdk_event_scsi.a 00:02:01.567 LIB libspdk_event_nvmf.a 00:02:01.827 CC module/event/subsystems/iscsi/iscsi.o 00:02:01.827 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:01.827 LIB libspdk_event_vhost_scsi.a 00:02:01.827 LIB libspdk_event_iscsi.a 00:02:02.086 CC test/rpc_client/rpc_client_test.o 00:02:02.086 TEST_HEADER include/spdk/accel.h 00:02:02.086 TEST_HEADER include/spdk/accel_module.h 00:02:02.086 TEST_HEADER include/spdk/assert.h 00:02:02.086 TEST_HEADER include/spdk/base64.h 
00:02:02.086 TEST_HEADER include/spdk/bdev.h 00:02:02.086 TEST_HEADER include/spdk/barrier.h 00:02:02.086 TEST_HEADER include/spdk/bdev_module.h 00:02:02.086 TEST_HEADER include/spdk/bdev_zone.h 00:02:02.086 TEST_HEADER include/spdk/bit_array.h 00:02:02.086 TEST_HEADER include/spdk/bit_pool.h 00:02:02.086 TEST_HEADER include/spdk/blob_bdev.h 00:02:02.086 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:02.086 TEST_HEADER include/spdk/blob.h 00:02:02.086 TEST_HEADER include/spdk/blobfs.h 00:02:02.086 TEST_HEADER include/spdk/config.h 00:02:02.086 TEST_HEADER include/spdk/conf.h 00:02:02.086 TEST_HEADER include/spdk/cpuset.h 00:02:02.354 TEST_HEADER include/spdk/crc32.h 00:02:02.354 TEST_HEADER include/spdk/crc16.h 00:02:02.354 TEST_HEADER include/spdk/crc64.h 00:02:02.354 TEST_HEADER include/spdk/dif.h 00:02:02.354 TEST_HEADER include/spdk/dma.h 00:02:02.354 TEST_HEADER include/spdk/endian.h 00:02:02.354 TEST_HEADER include/spdk/env_dpdk.h 00:02:02.354 TEST_HEADER include/spdk/env.h 00:02:02.354 TEST_HEADER include/spdk/event.h 00:02:02.354 TEST_HEADER include/spdk/fd_group.h 00:02:02.354 TEST_HEADER include/spdk/file.h 00:02:02.354 TEST_HEADER include/spdk/fd.h 00:02:02.354 TEST_HEADER include/spdk/ftl.h 00:02:02.354 TEST_HEADER include/spdk/gpt_spec.h 00:02:02.354 TEST_HEADER include/spdk/histogram_data.h 00:02:02.354 TEST_HEADER include/spdk/hexlify.h 00:02:02.354 TEST_HEADER include/spdk/idxd.h 00:02:02.354 CC app/trace_record/trace_record.o 00:02:02.354 TEST_HEADER include/spdk/idxd_spec.h 00:02:02.354 TEST_HEADER include/spdk/init.h 00:02:02.354 TEST_HEADER include/spdk/ioat.h 00:02:02.354 TEST_HEADER include/spdk/ioat_spec.h 00:02:02.354 TEST_HEADER include/spdk/iscsi_spec.h 00:02:02.354 TEST_HEADER include/spdk/json.h 00:02:02.354 TEST_HEADER include/spdk/jsonrpc.h 00:02:02.354 CXX app/trace/trace.o 00:02:02.354 TEST_HEADER include/spdk/keyring.h 00:02:02.354 CC app/spdk_top/spdk_top.o 00:02:02.354 TEST_HEADER include/spdk/keyring_module.h 00:02:02.354 TEST_HEADER include/spdk/likely.h 00:02:02.354 CC app/spdk_nvme_perf/perf.o 00:02:02.354 TEST_HEADER include/spdk/log.h 00:02:02.354 TEST_HEADER include/spdk/memory.h 00:02:02.354 CC app/spdk_lspci/spdk_lspci.o 00:02:02.354 TEST_HEADER include/spdk/mmio.h 00:02:02.354 TEST_HEADER include/spdk/lvol.h 00:02:02.354 TEST_HEADER include/spdk/notify.h 00:02:02.354 TEST_HEADER include/spdk/nbd.h 00:02:02.354 TEST_HEADER include/spdk/nvme.h 00:02:02.354 CC app/spdk_nvme_discover/discovery_aer.o 00:02:02.354 TEST_HEADER include/spdk/nvme_intel.h 00:02:02.354 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:02.354 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:02.354 TEST_HEADER include/spdk/nvme_zns.h 00:02:02.354 TEST_HEADER include/spdk/nvme_spec.h 00:02:02.354 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:02.354 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:02.354 TEST_HEADER include/spdk/nvmf.h 00:02:02.354 TEST_HEADER include/spdk/nvmf_spec.h 00:02:02.354 TEST_HEADER include/spdk/nvmf_transport.h 00:02:02.354 TEST_HEADER include/spdk/opal.h 00:02:02.354 CC app/spdk_nvme_identify/identify.o 00:02:02.354 TEST_HEADER include/spdk/opal_spec.h 00:02:02.354 TEST_HEADER include/spdk/pci_ids.h 00:02:02.354 TEST_HEADER include/spdk/pipe.h 00:02:02.354 TEST_HEADER include/spdk/queue.h 00:02:02.354 TEST_HEADER include/spdk/reduce.h 00:02:02.354 TEST_HEADER include/spdk/rpc.h 00:02:02.354 TEST_HEADER include/spdk/scheduler.h 00:02:02.354 TEST_HEADER include/spdk/scsi.h 00:02:02.355 TEST_HEADER include/spdk/scsi_spec.h 00:02:02.355 TEST_HEADER 
include/spdk/sock.h 00:02:02.355 TEST_HEADER include/spdk/stdinc.h 00:02:02.355 TEST_HEADER include/spdk/string.h 00:02:02.355 TEST_HEADER include/spdk/thread.h 00:02:02.355 TEST_HEADER include/spdk/trace.h 00:02:02.355 TEST_HEADER include/spdk/trace_parser.h 00:02:02.355 TEST_HEADER include/spdk/tree.h 00:02:02.355 TEST_HEADER include/spdk/ublk.h 00:02:02.355 TEST_HEADER include/spdk/util.h 00:02:02.355 TEST_HEADER include/spdk/uuid.h 00:02:02.355 TEST_HEADER include/spdk/version.h 00:02:02.355 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:02.355 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:02.355 TEST_HEADER include/spdk/vhost.h 00:02:02.355 TEST_HEADER include/spdk/vmd.h 00:02:02.355 TEST_HEADER include/spdk/xor.h 00:02:02.355 TEST_HEADER include/spdk/zipf.h 00:02:02.355 CXX test/cpp_headers/accel_module.o 00:02:02.355 CXX test/cpp_headers/accel.o 00:02:02.355 CXX test/cpp_headers/assert.o 00:02:02.355 CXX test/cpp_headers/barrier.o 00:02:02.355 CXX test/cpp_headers/base64.o 00:02:02.355 CXX test/cpp_headers/bdev_module.o 00:02:02.355 CXX test/cpp_headers/bdev.o 00:02:02.355 CC app/nvmf_tgt/nvmf_main.o 00:02:02.355 CXX test/cpp_headers/bit_array.o 00:02:02.355 CXX test/cpp_headers/bdev_zone.o 00:02:02.355 CXX test/cpp_headers/bit_pool.o 00:02:02.355 CXX test/cpp_headers/blob_bdev.o 00:02:02.355 CXX test/cpp_headers/blobfs.o 00:02:02.355 CXX test/cpp_headers/blobfs_bdev.o 00:02:02.355 CXX test/cpp_headers/blob.o 00:02:02.355 CXX test/cpp_headers/conf.o 00:02:02.355 CXX test/cpp_headers/cpuset.o 00:02:02.355 CXX test/cpp_headers/config.o 00:02:02.355 CC app/iscsi_tgt/iscsi_tgt.o 00:02:02.355 CXX test/cpp_headers/crc16.o 00:02:02.355 CXX test/cpp_headers/crc32.o 00:02:02.355 CXX test/cpp_headers/dif.o 00:02:02.355 CXX test/cpp_headers/crc64.o 00:02:02.355 CXX test/cpp_headers/dma.o 00:02:02.355 CXX test/cpp_headers/endian.o 00:02:02.355 CXX test/cpp_headers/env_dpdk.o 00:02:02.355 CXX test/cpp_headers/env.o 00:02:02.355 CC app/vhost/vhost.o 00:02:02.355 CXX test/cpp_headers/event.o 00:02:02.355 CXX test/cpp_headers/fd.o 00:02:02.355 CXX test/cpp_headers/fd_group.o 00:02:02.355 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:02.355 CXX test/cpp_headers/ftl.o 00:02:02.355 CXX test/cpp_headers/file.o 00:02:02.355 CC app/spdk_tgt/spdk_tgt.o 00:02:02.355 CXX test/cpp_headers/hexlify.o 00:02:02.355 CXX test/cpp_headers/gpt_spec.o 00:02:02.355 CC app/spdk_dd/spdk_dd.o 00:02:02.355 CXX test/cpp_headers/histogram_data.o 00:02:02.355 CXX test/cpp_headers/idxd_spec.o 00:02:02.355 CXX test/cpp_headers/idxd.o 00:02:02.355 CXX test/cpp_headers/init.o 00:02:02.355 CC test/event/reactor/reactor.o 00:02:02.355 CC test/app/histogram_perf/histogram_perf.o 00:02:02.355 CC test/event/event_perf/event_perf.o 00:02:02.355 CC test/app/jsoncat/jsoncat.o 00:02:02.355 CC test/event/reactor_perf/reactor_perf.o 00:02:02.355 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:02.355 CC test/env/vtophys/vtophys.o 00:02:02.355 CC test/env/memory/memory_ut.o 00:02:02.355 CC test/nvme/sgl/sgl.o 00:02:02.355 CC test/nvme/reset/reset.o 00:02:02.355 CC test/nvme/overhead/overhead.o 00:02:02.355 CC test/event/app_repeat/app_repeat.o 00:02:02.355 CC test/nvme/e2edp/nvme_dp.o 00:02:02.355 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:02.355 CC examples/ioat/verify/verify.o 00:02:02.355 CC test/thread/lock/spdk_lock.o 00:02:02.355 CC test/nvme/aer/aer.o 00:02:02.355 CC test/nvme/connect_stress/connect_stress.o 00:02:02.355 CC test/nvme/boot_partition/boot_partition.o 00:02:02.355 CC 
test/nvme/err_injection/err_injection.o 00:02:02.355 CC test/nvme/reserve/reserve.o 00:02:02.355 CC examples/ioat/perf/perf.o 00:02:02.355 CC test/nvme/fused_ordering/fused_ordering.o 00:02:02.355 CC test/bdev/bdevio/bdevio.o 00:02:02.355 CC test/nvme/compliance/nvme_compliance.o 00:02:02.355 CC test/nvme/simple_copy/simple_copy.o 00:02:02.355 CC test/nvme/fdp/fdp.o 00:02:02.355 CXX test/cpp_headers/ioat.o 00:02:02.355 CC test/app/stub/stub.o 00:02:02.355 CC test/nvme/startup/startup.o 00:02:02.355 CC test/env/pci/pci_ut.o 00:02:02.355 CC test/blobfs/mkfs/mkfs.o 00:02:02.355 CC test/thread/poller_perf/poller_perf.o 00:02:02.355 CC examples/idxd/perf/perf.o 00:02:02.355 CC test/event/scheduler/scheduler.o 00:02:02.355 CC test/nvme/cuse/cuse.o 00:02:02.355 CC examples/accel/perf/accel_perf.o 00:02:02.355 CC examples/vmd/lsvmd/lsvmd.o 00:02:02.355 CC examples/vmd/led/led.o 00:02:02.355 CC app/fio/nvme/fio_plugin.o 00:02:02.355 CC examples/util/zipf/zipf.o 00:02:02.355 CC examples/sock/hello_world/hello_sock.o 00:02:02.355 CC examples/nvme/reconnect/reconnect.o 00:02:02.355 CC examples/nvme/hotplug/hotplug.o 00:02:02.355 CC test/app/bdev_svc/bdev_svc.o 00:02:02.355 CC examples/nvme/arbitration/arbitration.o 00:02:02.355 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:02.355 CC examples/nvme/hello_world/hello_world.o 00:02:02.355 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:02.355 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:02.355 CC examples/nvme/abort/abort.o 00:02:02.355 LINK spdk_lspci 00:02:02.355 CC test/dma/test_dma/test_dma.o 00:02:02.355 LINK rpc_client_test 00:02:02.355 CC test/accel/dif/dif.o 00:02:02.355 CC examples/blob/cli/blobcli.o 00:02:02.355 CC test/env/mem_callbacks/mem_callbacks.o 00:02:02.355 CC examples/blob/hello_world/hello_blob.o 00:02:02.355 CC examples/bdev/hello_world/hello_bdev.o 00:02:02.355 CC examples/nvmf/nvmf/nvmf.o 00:02:02.355 CC examples/bdev/bdevperf/bdevperf.o 00:02:02.355 CC examples/thread/thread/thread_ex.o 00:02:02.355 CC app/fio/bdev/fio_plugin.o 00:02:02.619 CC test/lvol/esnap/esnap.o 00:02:02.619 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:02.619 LINK spdk_nvme_discover 00:02:02.619 CXX test/cpp_headers/ioat_spec.o 00:02:02.619 CXX test/cpp_headers/iscsi_spec.o 00:02:02.619 LINK reactor 00:02:02.619 CXX test/cpp_headers/json.o 00:02:02.619 CXX test/cpp_headers/jsonrpc.o 00:02:02.619 LINK histogram_perf 00:02:02.619 LINK event_perf 00:02:02.619 CXX test/cpp_headers/keyring.o 00:02:02.619 LINK jsoncat 00:02:02.619 CXX test/cpp_headers/keyring_module.o 00:02:02.619 CXX test/cpp_headers/likely.o 00:02:02.619 CXX test/cpp_headers/log.o 00:02:02.619 LINK spdk_trace_record 00:02:02.619 CXX test/cpp_headers/lvol.o 00:02:02.619 CXX test/cpp_headers/memory.o 00:02:02.619 CXX test/cpp_headers/mmio.o 00:02:02.619 CXX test/cpp_headers/nbd.o 00:02:02.619 LINK reactor_perf 00:02:02.619 CXX test/cpp_headers/notify.o 00:02:02.619 CXX test/cpp_headers/nvme.o 00:02:02.619 CXX test/cpp_headers/nvme_intel.o 00:02:02.619 CXX test/cpp_headers/nvme_ocssd.o 00:02:02.619 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:02.619 CXX test/cpp_headers/nvme_spec.o 00:02:02.619 CXX test/cpp_headers/nvme_zns.o 00:02:02.619 LINK app_repeat 00:02:02.619 LINK nvmf_tgt 00:02:02.619 CXX test/cpp_headers/nvmf_cmd.o 00:02:02.619 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:02.619 CXX test/cpp_headers/nvmf.o 00:02:02.619 LINK vhost 00:02:02.619 LINK vtophys 00:02:02.619 CXX test/cpp_headers/nvmf_spec.o 00:02:02.619 CXX test/cpp_headers/nvmf_transport.o 00:02:02.619 CXX 
test/cpp_headers/opal.o 00:02:02.619 CXX test/cpp_headers/opal_spec.o 00:02:02.619 CXX test/cpp_headers/pci_ids.o 00:02:02.619 CXX test/cpp_headers/pipe.o 00:02:02.619 CXX test/cpp_headers/queue.o 00:02:02.619 CXX test/cpp_headers/reduce.o 00:02:02.619 LINK lsvmd 00:02:02.619 LINK env_dpdk_post_init 00:02:02.619 LINK poller_perf 00:02:02.619 LINK interrupt_tgt 00:02:02.619 LINK led 00:02:02.619 LINK iscsi_tgt 00:02:02.619 LINK zipf 00:02:02.619 CXX test/cpp_headers/rpc.o 00:02:02.619 LINK connect_stress 00:02:02.619 CXX test/cpp_headers/scheduler.o 00:02:02.619 LINK boot_partition 00:02:02.619 CXX test/cpp_headers/scsi.o 00:02:02.619 LINK doorbell_aers 00:02:02.619 LINK err_injection 00:02:02.619 LINK spdk_tgt 00:02:02.619 LINK startup 00:02:02.619 LINK stub 00:02:02.619 LINK bdev_svc 00:02:02.619 LINK fused_ordering 00:02:02.619 CXX test/cpp_headers/scsi_spec.o 00:02:02.619 LINK reserve 00:02:02.619 LINK pmr_persistence 00:02:02.619 LINK verify 00:02:02.619 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:02.619 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:02.619 struct spdk_nvme_fdp_ruhs ruhs; 00:02:02.619 ^ 00:02:02.619 LINK mkfs 00:02:02.619 LINK ioat_perf 00:02:02.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:02.619 LINK cmb_copy 00:02:02.619 LINK simple_copy 00:02:02.619 LINK hello_world 00:02:02.619 LINK hotplug 00:02:02.619 LINK hello_sock 00:02:02.619 LINK nvme_dp 00:02:02.619 LINK reset 00:02:02.885 LINK scheduler 00:02:02.885 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:02.885 LINK sgl 00:02:02.885 LINK aer 00:02:02.885 LINK overhead 00:02:02.885 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:02.885 LINK fdp 00:02:02.885 CXX test/cpp_headers/sock.o 00:02:02.885 CXX test/cpp_headers/stdinc.o 00:02:02.885 CXX test/cpp_headers/string.o 00:02:02.885 LINK hello_blob 00:02:02.885 CXX test/cpp_headers/thread.o 00:02:02.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.885 CXX test/cpp_headers/trace.o 00:02:02.885 LINK hello_bdev 00:02:02.885 CXX test/cpp_headers/trace_parser.o 00:02:02.885 LINK thread 00:02:02.885 CXX test/cpp_headers/tree.o 00:02:02.885 CXX test/cpp_headers/ublk.o 00:02:02.885 CXX test/cpp_headers/util.o 00:02:02.885 CXX test/cpp_headers/uuid.o 00:02:02.885 CXX test/cpp_headers/version.o 00:02:02.885 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.885 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.885 CXX test/cpp_headers/vhost.o 00:02:02.885 CXX test/cpp_headers/vmd.o 00:02:02.885 CXX test/cpp_headers/xor.o 00:02:02.885 LINK spdk_trace 00:02:02.885 CXX test/cpp_headers/zipf.o 00:02:02.885 LINK idxd_perf 00:02:02.885 LINK reconnect 00:02:02.885 LINK nvmf 00:02:02.885 LINK bdevio 00:02:02.885 LINK arbitration 00:02:02.885 LINK test_dma 00:02:02.885 LINK abort 00:02:02.885 LINK spdk_dd 00:02:03.144 LINK pci_ut 00:02:03.144 LINK dif 00:02:03.144 LINK accel_perf 00:02:03.144 LINK nvme_manage 00:02:03.144 LINK nvme_compliance 00:02:03.144 LINK blobcli 00:02:03.144 LINK nvme_fuzz 00:02:03.144 1 warning generated. 
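(Editorial note, not part of the captured log: the CXX test/cpp_headers/*.o lines above appear to compile each public spdk/*.h header listed by the TEST_HEADER entries in its own C++ translation unit, i.e. a self-containment/C++-compatibility check. A minimal stand-alone sketch of that kind of check — header list, compiler, and flags are assumptions, not taken from this log — might look like:

# Hypothetical header self-compilation check; flags and include paths assumed.
cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "/tmp/${name}.cpp"
    g++ -std=c++11 -Iinclude -c "/tmp/${name}.cpp" -o "/tmp/${name}.o"
done
)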
00:02:03.144 LINK mem_callbacks 00:02:03.144 LINK llvm_vfio_fuzz 00:02:03.144 LINK spdk_bdev 00:02:03.144 LINK spdk_nvme 00:02:03.403 LINK spdk_nvme_identify 00:02:03.403 LINK bdevperf 00:02:03.403 LINK memory_ut 00:02:03.403 LINK spdk_nvme_perf 00:02:03.403 LINK vhost_fuzz 00:02:03.403 LINK spdk_top 00:02:03.403 LINK cuse 00:02:03.662 LINK llvm_nvme_fuzz 00:02:03.920 LINK spdk_lock 00:02:04.178 LINK iscsi_fuzz 00:02:06.710 LINK esnap 00:02:06.710 00:02:06.710 real 0m42.037s 00:02:06.710 user 6m9.989s 00:02:06.710 sys 2m43.469s 00:02:06.710 10:54:03 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:06.710 10:54:03 make -- common/autotest_common.sh@10 -- $ set +x 00:02:06.710 ************************************ 00:02:06.710 END TEST make 00:02:06.710 ************************************ 00:02:06.710 10:54:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:06.710 10:54:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:06.710 10:54:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:06.710 10:54:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.710 10:54:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:06.710 10:54:03 -- pm/common@44 -- $ pid=1270175 00:02:06.710 10:54:03 -- pm/common@50 -- $ kill -TERM 1270175 00:02:06.710 10:54:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.710 10:54:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:06.710 10:54:03 -- pm/common@44 -- $ pid=1270177 00:02:06.710 10:54:03 -- pm/common@50 -- $ kill -TERM 1270177 00:02:06.710 10:54:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.710 10:54:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:06.710 10:54:03 -- pm/common@44 -- $ pid=1270179 00:02:06.710 10:54:03 -- pm/common@50 -- $ kill -TERM 1270179 00:02:06.710 10:54:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.710 10:54:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:06.710 10:54:03 -- pm/common@44 -- $ pid=1270209 00:02:06.710 10:54:03 -- pm/common@50 -- $ sudo -E kill -TERM 1270209 00:02:06.710 10:54:03 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:06.710 10:54:03 -- nvmf/common.sh@7 -- # uname -s 00:02:06.710 10:54:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:06.710 10:54:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:06.710 10:54:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:06.710 10:54:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:06.710 10:54:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:06.710 10:54:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:06.710 10:54:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:06.710 10:54:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:06.710 10:54:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:06.968 10:54:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:06.968 10:54:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:06.968 10:54:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:06.969 10:54:03 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:06.969 10:54:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:06.969 10:54:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:06.969 10:54:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:06.969 10:54:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:06.969 10:54:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:06.969 10:54:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.969 10:54:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.969 10:54:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.969 10:54:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.969 10:54:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.969 10:54:03 -- paths/export.sh@5 -- # export PATH 00:02:06.969 10:54:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.969 10:54:03 -- nvmf/common.sh@47 -- # : 0 00:02:06.969 10:54:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:06.969 10:54:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:06.969 10:54:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:06.969 10:54:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:06.969 10:54:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:06.969 10:54:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:06.969 10:54:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:06.969 10:54:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:06.969 10:54:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:06.969 10:54:03 -- spdk/autotest.sh@32 -- # uname -s 00:02:06.969 10:54:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:06.969 10:54:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:06.969 10:54:04 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:06.969 10:54:04 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:06.969 10:54:04 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:06.969 10:54:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:06.969 10:54:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:06.969 10:54:04 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:02:06.969 10:54:04 -- spdk/autotest.sh@48 -- # udevadm_pid=1331551 00:02:06.969 10:54:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:06.969 10:54:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:06.969 10:54:04 -- pm/common@17 -- # local monitor 00:02:06.969 10:54:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.969 10:54:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.969 10:54:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.969 10:54:04 -- pm/common@21 -- # date +%s 00:02:06.969 10:54:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.969 10:54:04 -- pm/common@21 -- # date +%s 00:02:06.969 10:54:04 -- pm/common@25 -- # sleep 1 00:02:06.969 10:54:04 -- pm/common@21 -- # date +%s 00:02:06.969 10:54:04 -- pm/common@21 -- # date +%s 00:02:06.969 10:54:04 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763244 00:02:06.969 10:54:04 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763244 00:02:06.969 10:54:04 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763244 00:02:06.969 10:54:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763244 00:02:06.969 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715763244_collect-vmstat.pm.log 00:02:06.969 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715763244_collect-cpu-temp.pm.log 00:02:06.969 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715763244_collect-cpu-load.pm.log 00:02:06.969 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715763244_collect-bmc-pm.bmc.pm.log 00:02:07.905 10:54:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:07.905 10:54:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:07.905 10:54:05 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:07.905 10:54:05 -- common/autotest_common.sh@10 -- # set +x 00:02:07.905 10:54:05 -- spdk/autotest.sh@59 -- # create_test_list 00:02:07.905 10:54:05 -- common/autotest_common.sh@745 -- # xtrace_disable 00:02:07.905 10:54:05 -- common/autotest_common.sh@10 -- # set +x 00:02:07.905 10:54:05 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:07.905 10:54:05 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:07.905 10:54:05 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:07.905 10:54:05 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:07.905 10:54:05 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 
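(Editorial note, not part of the captured log: the pm/common trace above launches one background resource monitor per metric, each writing a timestamped monitor.autotest.sh.<epoch> log under ../output/power; the collect-*.pid files referenced by the kill -TERM calls at the top of this section are how a later run tears them down. A simplified sketch of that start/stop pattern — script paths and log naming are taken from the trace, the loop and pid-file handling are assumed:

# Simplified illustration of the monitor start/stop pattern seen in the trace.
out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
stamp=$(date +%s)
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    scripts/perf/pm/$mon -d "$out" -l -p "monitor.autotest.sh.$stamp" &
    echo $! > "$out/$mon.pid"   # later stopped with: kill -TERM "$(cat "$out/$mon.pid")"
done
)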
00:02:07.905 10:54:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:07.905 10:54:05 -- common/autotest_common.sh@1452 -- # uname 00:02:07.905 10:54:05 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:02:07.905 10:54:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:07.905 10:54:05 -- common/autotest_common.sh@1472 -- # uname 00:02:07.905 10:54:05 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:02:07.905 10:54:05 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:07.905 10:54:05 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:07.905 10:54:05 -- spdk/autotest.sh@72 -- # hash lcov 00:02:07.905 10:54:05 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:07.905 10:54:05 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:07.905 10:54:05 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:07.905 10:54:05 -- common/autotest_common.sh@10 -- # set +x 00:02:07.905 10:54:05 -- spdk/autotest.sh@91 -- # rm -f 00:02:07.905 10:54:05 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:11.197 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:11.197 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:11.198 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:11.198 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:11.198 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:11.457 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:11.717 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:11.717 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:11.717 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:11.717 10:54:08 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:11.717 10:54:08 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:11.717 10:54:08 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:11.717 10:54:08 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:11.717 10:54:08 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:11.717 10:54:08 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:11.717 10:54:08 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:11.717 10:54:08 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:11.717 10:54:08 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:11.717 10:54:08 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:11.717 10:54:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:11.717 10:54:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:11.717 10:54:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:11.717 10:54:08 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:11.717 10:54:08 -- scripts/common.sh@387 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:11.717 No valid GPT data, bailing 00:02:11.717 10:54:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:11.717 10:54:08 -- scripts/common.sh@391 -- # pt= 00:02:11.717 10:54:08 -- scripts/common.sh@392 -- # return 1 00:02:11.717 10:54:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:11.717 1+0 records in 00:02:11.717 1+0 records out 00:02:11.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0019406 s, 540 MB/s 00:02:11.717 10:54:08 -- spdk/autotest.sh@118 -- # sync 00:02:11.717 10:54:08 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:11.717 10:54:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:11.717 10:54:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:19.844 10:54:15 -- spdk/autotest.sh@124 -- # uname -s 00:02:19.844 10:54:15 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:19.844 10:54:15 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:19.844 10:54:15 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:19.844 10:54:15 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:19.844 10:54:15 -- common/autotest_common.sh@10 -- # set +x 00:02:19.844 ************************************ 00:02:19.844 START TEST setup.sh 00:02:19.844 ************************************ 00:02:19.844 10:54:15 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:19.844 * Looking for test storage... 00:02:19.844 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:19.844 10:54:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:19.844 10:54:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:19.844 10:54:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:19.844 10:54:16 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:19.844 10:54:16 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:19.844 10:54:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:19.844 ************************************ 00:02:19.844 START TEST acl 00:02:19.844 ************************************ 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:19.844 * Looking for test storage... 
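Earlier in this block autotest treated /dev/nvme0n1 as unused: spdk-gpt.py reported "No valid GPT data, bailing" and blkid returned no PTTYPE, so the first MiB of the namespace was zeroed with dd before the sync. A minimal sketch of that decision, using only the blkid and dd calls visible in the trace (the spdk-gpt.py pass and the zoned-device exclusion are left out, and DEV is a placeholder):

    #!/usr/bin/env bash
    # Sketch of the "wipe only when no partition table" step traced above.
    # DEV is a placeholder; the real run targets /dev/nvme0n1 and also
    # consults scripts/spdk-gpt.py before falling back to blkid.
    DEV=${1:?usage: $0 /dev/nvmeXnY}

    pt=$(blkid -s PTTYPE -o value "$DEV" || true)   # empty when no partition table
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$DEV" bs=1M count=1     # clear the first MiB, as in the log
        sync
    else
        echo "leaving $DEV alone: partition table type '$pt' found" >&2
    fi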
00:02:19.844 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:19.844 10:54:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:19.844 10:54:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:19.844 10:54:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:19.844 10:54:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:19.844 10:54:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:19.844 10:54:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:19.844 10:54:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:19.844 10:54:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:19.844 10:54:16 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:23.137 10:54:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:23.137 10:54:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:23.137 10:54:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.137 10:54:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:23.137 10:54:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:23.137 10:54:20 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:26.426 Hugepages 00:02:26.426 node hugesize free / total 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 00:02:26.426 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.426 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:26.427 10:54:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:26.427 10:54:23 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:26.427 10:54:23 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:26.427 10:54:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:26.427 ************************************ 00:02:26.427 START TEST denied 00:02:26.427 ************************************ 00:02:26.427 10:54:23 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:02:26.427 10:54:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:26.427 10:54:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:26.427 10:54:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:26.427 10:54:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:26.427 10:54:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:29.715 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:29.715 
10:54:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:29.715 10:54:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.030 00:02:35.030 real 0m7.970s 00:02:35.030 user 0m2.622s 00:02:35.030 sys 0m4.714s 00:02:35.030 10:54:31 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:35.030 10:54:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:35.030 ************************************ 00:02:35.030 END TEST denied 00:02:35.030 ************************************ 00:02:35.030 10:54:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:35.030 10:54:31 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:35.030 10:54:31 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:35.030 10:54:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:35.030 ************************************ 00:02:35.030 START TEST allowed 00:02:35.030 ************************************ 00:02:35.030 10:54:31 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:02:35.030 10:54:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:35.030 10:54:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:35.030 10:54:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:35.030 10:54:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.030 10:54:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:39.220 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:39.220 10:54:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:39.220 10:54:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:39.220 10:54:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:39.220 10:54:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:39.220 10:54:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.414 00:02:43.414 real 0m8.359s 00:02:43.414 user 0m2.317s 00:02:43.414 sys 0m4.545s 00:02:43.414 10:54:39 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:43.414 10:54:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:43.414 ************************************ 00:02:43.414 END TEST allowed 00:02:43.414 ************************************ 00:02:43.414 00:02:43.414 real 0m23.849s 00:02:43.414 user 0m7.526s 00:02:43.414 sys 0m14.448s 00:02:43.414 10:54:39 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:43.414 10:54:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:43.414 ************************************ 00:02:43.414 END TEST acl 00:02:43.414 ************************************ 00:02:43.414 10:54:39 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:43.414 10:54:39 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 
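The two acl sub-tests above exercise setup.sh's PCI filtering from both sides: with PCI_BLOCKED=' 0000:d8:00.0' the config pass must print "Skipping denied controller" and leave the device on the nvme driver, while with PCI_ALLOWED=0000:d8:00.0 it must rebind it, which is the "0000:d8:00.0 (8086 0a54): nvme -> vfio-pci" line. The verify step is just resolving the device's driver symlink in sysfs. A minimal sketch of that check, with the BDF and the expected driver as placeholders rather than the acl.sh verify function itself:

    #!/usr/bin/env bash
    # Sketch of the driver verification used by the acl tests above:
    # resolve /sys/bus/pci/devices/<BDF>/driver and compare its basename
    # against the driver expected after setup.sh config has run.
    BDF=${1:?usage: $0 <pci-bdf> <expected-driver>}     # e.g. 0000:d8:00.0
    EXPECTED=${2:?expected driver name missing}         # e.g. nvme or vfio-pci

    link=$(readlink -f "/sys/bus/pci/devices/$BDF/driver")
    driver=${link##*/}

    if [[ $driver == "$EXPECTED" ]]; then
        echo "$BDF is bound to $driver, as expected"
    else
        echo "$BDF is bound to ${driver:-nothing}, expected $EXPECTED" >&2
        exit 1
    fi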
00:02:43.414 10:54:39 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:43.414 10:54:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:43.414 ************************************ 00:02:43.414 START TEST hugepages 00:02:43.414 ************************************ 00:02:43.414 10:54:40 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:43.414 * Looking for test storage... 00:02:43.414 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40558356 kB' 'MemAvailable: 42203992 kB' 'Buffers: 2716 kB' 'Cached: 11259200 kB' 'SwapCached: 20048 kB' 'Active: 6953308 kB' 'Inactive: 4909708 kB' 'Active(anon): 6498728 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584996 kB' 'Mapped: 196412 kB' 'Shmem: 9124296 kB' 'KReclaimable: 309932 kB' 'Slab: 925416 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 615484 kB' 'KernelStack: 21904 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 11090672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216344 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.414 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.415 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.416 10:54:40 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.416 10:54:40 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:43.416 10:54:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:43.416 10:54:40 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:43.416 10:54:40 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:43.416 10:54:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:43.416 ************************************ 00:02:43.416 START TEST default_setup 00:02:43.416 ************************************ 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.416 10:54:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:46.704 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
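The long run of "[[ <field> == Hugepagesize ]] / continue" records above is get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize and echoes 2048. default_setup then requests 2097152 kB of hugepages on node 0, that is 2097152 / 2048 = 1024 pages, after clear_hp has written 0 into every per-node nr_hugepages file. A minimal sketch of an equivalent lookup and the page-count arithmetic (get_meminfo_sketch is a hypothetical helper, not the setup/common.sh function):

    #!/usr/bin/env bash
    # Sketch of a /proc/meminfo lookup equivalent to the traced get_meminfo
    # loop, plus the hugepage arithmetic used by default_setup above.
    get_meminfo_sketch() {                    # hypothetical helper
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)    # 2048 on this system
    size_kb=2097152                                        # requested by the test
    nr_hugepages=$(( size_kb / hugepagesize_kb ))          # 2097152 / 2048 = 1024
    echo "need $nr_hugepages hugepages of ${hugepagesize_kb} kB each"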
00:02:46.704 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:46.704 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:48.089 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42755112 kB' 'MemAvailable: 44400748 kB' 'Buffers: 2716 kB' 'Cached: 11259332 kB' 'SwapCached: 20048 kB' 'Active: 6967688 kB' 'Inactive: 4909708 kB' 'Active(anon): 6513108 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598560 kB' 'Mapped: 196500 kB' 'Shmem: 9124428 kB' 'KReclaimable: 309932 kB' 'Slab: 923060 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 613128 kB' 'KernelStack: 22144 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 
'Committed_AS: 11103700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216472 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.089 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.090 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 
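Editor's note on the trace above: under `set -x`, setup/common.sh's get_meminfo helper walks /proc/meminfo field by field, skipping every key until it matches the requested one (AnonHugePages here) and echoing its value, which is 0 on this runner. A minimal standalone sketch of that parsing pattern, with hypothetical names rather than the SPDK helper itself, could look like:

```bash
#!/usr/bin/env bash
# Hypothetical simplification of the IFS=': ' / read -r var val _ loop seen in
# the trace: return a single value from /proc/meminfo by key name.
get_meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        # var is the field name, val the number; a trailing "kB" lands in _.
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_value AnonHugePages   # prints 0 on this runner, matching the trace
```

The real helper reads from an array snapshot of the file instead of the file itself, but the key-matching loop it traces is the same shape.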
-- # local var val 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42756944 kB' 'MemAvailable: 44402580 kB' 'Buffers: 2716 kB' 'Cached: 11259336 kB' 'SwapCached: 20048 kB' 'Active: 6966864 kB' 'Inactive: 4909708 kB' 'Active(anon): 6512284 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598224 kB' 'Mapped: 196484 kB' 'Shmem: 9124432 kB' 'KReclaimable: 309932 kB' 'Slab: 922932 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 613000 kB' 'KernelStack: 21936 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11103720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216472 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.091 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.092 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
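Editor's note: the trace just above also shows how the helper picks its data source. With `node=` empty, the candidate per-NUMA-node path degenerates to /sys/devices/system/node/node/meminfo, which never exists, so mem_f stays /proc/meminfo; the `mem=("${mem[@]#Node +([0-9]) }")` expansion strips the "Node N " prefix that only per-node files carry. A hedged standalone sketch of that selection (node_f is an illustrative name, not from the script):

```bash
#!/usr/bin/env bash
shopt -s extglob                      # needed for the "Node <id> " prefix strip below
node=""                               # empty here; a NUMA node id (e.g. 0) would select a per-node file
mem_f=/proc/meminfo
node_f=/sys/devices/system/node/node${node}/meminfo
[[ -e $node_f ]] && mem_f=$node_f     # with node empty this path cannot exist, so /proc/meminfo wins
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix; system-wide lines do not
```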
IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42755704 kB' 'MemAvailable: 44401340 kB' 'Buffers: 2716 kB' 'Cached: 11259352 kB' 'SwapCached: 20048 kB' 'Active: 6967696 kB' 'Inactive: 4909708 kB' 'Active(anon): 6513116 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598548 kB' 'Mapped: 196484 kB' 'Shmem: 9124448 kB' 'KReclaimable: 309932 kB' 'Slab: 922900 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 612968 kB' 'KernelStack: 22144 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11103740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 
10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.093 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.094 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:48.095 nr_hugepages=1024 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.095 resv_hugepages=0 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.095 surplus_hugepages=0 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.095 anon_hugepages=0 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42755700 kB' 'MemAvailable: 44401336 kB' 'Buffers: 2716 kB' 'Cached: 11259376 kB' 'SwapCached: 20048 kB' 'Active: 
6967788 kB' 'Inactive: 4909708 kB' 'Active(anon): 6513208 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598600 kB' 'Mapped: 196484 kB' 'Shmem: 9124472 kB' 'KReclaimable: 309932 kB' 'Slab: 922900 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 612968 kB' 'KernelStack: 22000 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11103764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 
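Editor's note: a few lines above, hugepages.sh has collected anon=0, surp=0 and resv=0, echoed nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and verified `(( 1024 == nr_hugepages + surp + resv ))` before re-reading HugePages_Total. As a hypothetical recap of that accounting step (reusing the get_meminfo_value sketch from earlier, not SPDK's get_meminfo), the numbers in the snapshots are self-consistent:

```bash
#!/usr/bin/env bash
anon=$(get_meminfo_value AnonHugePages)             # 0 kB in the snapshots above
surp=$(get_meminfo_value HugePages_Surp)            # 0
resv=$(get_meminfo_value HugePages_Rsvd)            # 0
nr_hugepages=$(get_meminfo_value HugePages_Total)   # 1024
hugepagesize_kb=$(get_meminfo_value Hugepagesize)   # 2048

(( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
echo "hugetlb pool: $(( nr_hugepages * hugepagesize_kb )) kB"   # 1024 * 2048 = 2097152 kB, matching Hugetlb
```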
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 
10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.095 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:48.096 
10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:48.096 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20775812 kB' 'MemUsed: 11863328 kB' 'SwapCached: 17412 kB' 'Active: 3980688 kB' 'Inactive: 4022624 kB' 'Active(anon): 3934216 kB' 'Inactive(anon): 3219712 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7585624 kB' 'Mapped: 113388 kB' 'AnonPages: 420792 kB' 'Shmem: 6718828 kB' 'KernelStack: 12408 kB' 'PageTables: 5372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201256 kB' 'Slab: 526600 kB' 'SReclaimable: 201256 kB' 'SUnreclaim: 325344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:48.097 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.097 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:48.098 node0=1024 expecting 1024 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:48.098 00:02:48.098 real 0m5.126s 00:02:48.098 user 0m1.365s 00:02:48.098 sys 0m2.340s 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:48.098 10:54:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:48.098 ************************************ 00:02:48.098 END TEST default_setup 00:02:48.098 ************************************ 00:02:48.358 10:54:45 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:48.358 10:54:45 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:48.358 10:54:45 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:48.358 10:54:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:48.358 ************************************ 00:02:48.358 START TEST per_node_1G_alloc 00:02:48.358 ************************************ 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
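The per_node_1G_alloc test that starts here requests 512 hugepages on each of two NUMA nodes (the trace below shows nr_hugepages=512 per node and NRHUGE=512 HUGENODE=0,1). As a hedged sketch of what that reservation amounts to, and not the exact commands scripts/setup.sh runs, the standard kernel sysfs knobs for per-node 2 MiB hugepages look like this; the loop bounds and page size are taken from the trace:

# Illustrative only: reserve 512 x 2 MiB hugepages on nodes 0 and 1 via the
# kernel's per-node hugetlb sysfs interface, then read the counts back.
NRHUGE=512
for node in 0 1; do
  echo "$NRHUGE" | sudo tee \
    /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
done
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages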
00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.358 10:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:51.656 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:51.656 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42782988 kB' 'MemAvailable: 44428624 kB' 'Buffers: 2716 kB' 'Cached: 11259484 kB' 'SwapCached: 20048 kB' 'Active: 6966676 kB' 'Inactive: 4909708 kB' 'Active(anon): 6512096 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596840 kB' 'Mapped: 195400 kB' 'Shmem: 9124580 kB' 'KReclaimable: 309932 kB' 'Slab: 923672 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 613740 kB' 'KernelStack: 21936 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11094276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 
13631488 kB' 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.656 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
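The long runs of "[[ key == pattern ]] ... continue" above are the trace of a key-by-key scan over meminfo: each line is split on ": " and every key that is not the one being looked up falls through to continue. A simplified, hedged re-implementation of that lookup pattern (not the project's setup/common.sh verbatim) is:

# Print the value of one meminfo key, optionally from a specific NUMA node.
# Per-node files prefix every line with "Node N ", so strip that first.
get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  sed -E 's/^Node [0-9]+ //' "$mem_f" | while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "${val:-0}"; break; }
  done
}

# Example usage: get_meminfo HugePages_Total; get_meminfo HugePages_Free 0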
00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.657 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
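The xtrace above (and the near-identical passes that follow for HugePages_Surp, HugePages_Rsvd and HugePages_Total) is setup/common.sh's get_meminfo scanning every /proc/meminfo key until it reaches the one requested, which is why each pass emits one "continue" per non-matching key. A minimal sketch reconstructed from the trace, assuming nothing beyond what the trace shows (it is not the verbatim SPDK source):

#!/usr/bin/env bash
# get_meminfo sketch, reconstructed from the xtrace in this log.
shopt -s extglob   # required for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1        # key to report, e.g. AnonHugePages or HugePages_Surp
    local node=${2:-}   # optional NUMA node number; empty means system-wide
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node queries read the node-local file instead, when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node N "; strip it so keys compare cleanly.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # Each line looks like "AnonHugePages:       0 kB".
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
        echo "$val"
        return 0
    done
    return 1
}

# On the node traced above both calls resolve to 0:
#   get_meminfo AnonHugePages   -> 0   (hugepages.sh sets anon=0)
#   get_meminfo HugePages_Surp  -> 0   (hugepages.sh sets surp=0)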
00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42783668 kB' 'MemAvailable: 44429304 kB' 'Buffers: 2716 kB' 'Cached: 11259488 kB' 'SwapCached: 20048 kB' 'Active: 6966536 kB' 'Inactive: 4909708 kB' 'Active(anon): 6511956 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597212 kB' 'Mapped: 195324 kB' 'Shmem: 9124584 kB' 'KReclaimable: 309932 kB' 'Slab: 923648 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 613716 kB' 'KernelStack: 21952 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11094296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.658 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.658 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.659 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:51.660 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42784740 kB' 'MemAvailable: 44430376 kB' 'Buffers: 2716 kB' 'Cached: 11259488 kB' 'SwapCached: 20048 kB' 'Active: 6966576 kB' 'Inactive: 4909708 kB' 'Active(anon): 6511996 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597248 kB' 'Mapped: 195324 kB' 'Shmem: 9124584 kB' 'KReclaimable: 309932 kB' 'Slab: 923648 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 613716 kB' 'KernelStack: 21968 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11094316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.660 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.661 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.662 nr_hugepages=1024 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.662 resv_hugepages=0 00:02:51.662 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.662 surplus_hugepages=0 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.662 anon_hugepages=0 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.662 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42785216 kB' 'MemAvailable: 44430852 kB' 'Buffers: 2716 kB' 'Cached: 11259488 kB' 'SwapCached: 20048 kB' 'Active: 6966716 kB' 'Inactive: 4909708 kB' 'Active(anon): 6512136 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597388 kB' 'Mapped: 195324 kB' 'Shmem: 9124584 kB' 'KReclaimable: 309932 kB' 'Slab: 923648 kB' 'SReclaimable: 309932 kB' 'SUnreclaim: 613716 kB' 'KernelStack: 21952 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11094340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.663 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.664 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.665 10:54:48 
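For reference, the get_meminfo helper that produces the long field-by-field trace above reduces to roughly the following. This is a minimal sketch modelled on the setup/common.sh calls visible in the log (mapfile over the meminfo file, strip the "Node N " prefix, then an IFS=': ' read loop until the requested key matches); the actual helper may differ in detail.

shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem var val _ line
    # Per-node queries read that node's own meminfo instead of /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; strip it so keys line up.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 1024 for HugePages_Total
            return 0
        fi
    done
    return 1
}

Called as get_meminfo HugePages_Total it prints the 1024 echoed above; get_meminfo HugePages_Surp 0 and get_meminfo HugePages_Surp 1 read the per-node files and print the surplus counts used in the node loop that follows.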
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21837392 kB' 'MemUsed: 10801748 kB' 'SwapCached: 17412 kB' 'Active: 3981288 kB' 'Inactive: 4022624 kB' 'Active(anon): 3934816 kB' 'Inactive(anon): 3219712 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7585656 kB' 'Mapped: 112248 kB' 'AnonPages: 421404 kB' 'Shmem: 6718860 kB' 'KernelStack: 12280 kB' 'PageTables: 4924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201256 kB' 'Slab: 527432 kB' 'SReclaimable: 201256 kB' 'SUnreclaim: 326176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.665 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.666 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20947608 kB' 'MemUsed: 6708472 kB' 'SwapCached: 2636 kB' 'Active: 2984984 kB' 'Inactive: 887084 kB' 'Active(anon): 2576876 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 880128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3696696 kB' 'Mapped: 83076 kB' 'AnonPages: 175408 kB' 'Shmem: 2405824 kB' 'KernelStack: 9656 kB' 'PageTables: 3508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 396216 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 287540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.667 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:51.668 node0=512 expecting 512 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:51.668 node1=512 expecting 512 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:51.668 00:02:51.668 real 0m3.456s 00:02:51.668 user 0m1.241s 00:02:51.668 sys 0m2.240s 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:51.668 10:54:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:51.668 ************************************ 00:02:51.668 END TEST per_node_1G_alloc 00:02:51.668 ************************************ 00:02:51.928 10:54:48 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:51.928 10:54:48 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:51.928 10:54:48 
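The "node0=512 expecting 512" and "node1=512 expecting 512" lines above are the test asserting that each NUMA node received its half of the 1024 hugepages. A rough, illustrative sketch of that per-node check follows (not the verbatim setup/hugepages.sh logic, which also folds HugePages_Surp and reserved pages into the expected count; verify_per_node and the hard-coded expected value are assumptions for illustration):

verify_per_node() {
    local node_dir node expected=512 actual   # 1024 pages split over 2 nodes
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Total:   512"
        actual=$(awk '/HugePages_Total/ {print $4}' "$node_dir/meminfo")
        echo "node${node}=${actual} expecting ${expected}"
        [[ $actual == "$expected" ]] || return 1
    done
}

When both nodes report 512, the final [[ 512 == 512 ]] comparison above succeeds and the per_node_1G_alloc test ends with the timing summary shown.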
setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:51.929 10:54:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:51.929 ************************************ 00:02:51.929 START TEST even_2G_alloc 00:02:51.929 ************************************ 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.929 10:54:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:55.219 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
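The get_test_nr_hugepages 2097152 call traced above is the arithmetic that drives the even_2G_alloc test: 2 GiB expressed in kB, divided by the 2048 kB default hugepage size, gives nr_hugepages=1024, and with HUGE_EVEN_ALLOC=yes those pages are split evenly across the two nodes. A minimal standalone sketch of that sizing (illustrative only; the script derives the same numbers through its own helpers):

size_kb=2097152                              # requested size: 2 GiB in kB
hugepage_kb=2048                             # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))    # 1024 pages
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))      # 512 pages on node0 and on node1
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes (${per_node} per node)"

The scripts/setup.sh output that continues below then rebinds the devices and performs the allocation itself.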
00:02:55.219 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:55.219 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42797548 kB' 'MemAvailable: 44443152 kB' 'Buffers: 2716 kB' 'Cached: 11259648 kB' 'SwapCached: 20048 kB' 'Active: 6968416 kB' 'Inactive: 4909708 kB' 'Active(anon): 6513836 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598560 kB' 'Mapped: 195428 kB' 'Shmem: 9124744 kB' 'KReclaimable: 309868 kB' 'Slab: 924128 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614260 kB' 'KernelStack: 21936 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11094956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.219 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.219 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
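The trace above covers the two mechanisms this test exercises: even_2G_alloc requests nr_hugepages=1024 pages of 2048 kB (2 GiB in total) and distributes them evenly across the two NUMA nodes (512 per node), and verify_nr_hugepages then walks /proc/meminfo one "Key: value" record at a time to read a single field (AnonHugePages is 0 kB on this run, hence anon=0; the same loop repeats below for HugePages_Surp and HugePages_Rsvd). Below is a minimal standalone sketch of both steps, assuming only a stock Linux /proc/meminfo; the helper names split_hugepages_evenly and meminfo_value are illustrative only and are not the functions defined in setup/hugepages.sh or setup/common.sh.

#!/usr/bin/env bash
# Illustrative sketch only; not the SPDK setup/common.sh implementation.

# Split a total hugepage count evenly across NUMA nodes, mirroring the traced
# get_test_nr_hugepages_per_node step (1024 pages over 2 nodes -> 512 each).
split_hugepages_evenly() {
  local total=$1 nodes=$2 i
  local -a per_node
  for ((i = 0; i < nodes; i++)); do
    per_node[i]=$((total / nodes))
  done
  per_node[0]=$((per_node[0] + total % nodes))   # any remainder goes to node 0
  for i in "${!per_node[@]}"; do
    echo "node${i}=${per_node[i]}"
  done
}

# Look up one field of /proc/meminfo the way the traced read loop does:
# split each "Key: value unit" record on ': ' and match the requested key.
meminfo_value() {
  local want=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$want" ]]; then
      echo "${val:-0}"
      return 0
    fi
  done < /proc/meminfo
  echo 0   # key not found: report 0, as the trace does for anon/surplus pages
}

split_hugepages_evenly 1024 2      # -> node0=512, node1=512
meminfo_value AnonHugePages        # -> 0 on this run
meminfo_value HugePages_Total      # -> 1024 once the pages are reserved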
00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42798844 kB' 'MemAvailable: 44444448 kB' 'Buffers: 2716 kB' 'Cached: 11259652 kB' 'SwapCached: 20048 kB' 'Active: 6967684 kB' 'Inactive: 4909708 kB' 'Active(anon): 6513104 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598352 kB' 'Mapped: 195336 kB' 'Shmem: 9124748 kB' 'KReclaimable: 309868 kB' 'Slab: 924120 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614252 kB' 'KernelStack: 21952 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11094976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.220 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.221 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42799604 kB' 'MemAvailable: 44445208 kB' 'Buffers: 2716 kB' 'Cached: 11259668 kB' 'SwapCached: 20048 kB' 'Active: 6968116 kB' 'Inactive: 4909708 kB' 'Active(anon): 6513536 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598732 kB' 'Mapped: 195840 kB' 'Shmem: 9124764 kB' 'KReclaimable: 309868 kB' 'Slab: 924120 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614252 kB' 'KernelStack: 21936 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11096088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216392 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 
10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.222 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.485 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.486 nr_hugepages=1024 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.486 resv_hugepages=0 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.486 surplus_hugepages=0 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.486 anon_hugepages=0 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
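(The long runs of "continue" above come from setup/common.sh scanning one /proc/meminfo field per iteration until it hits the requested key. The following is a minimal sketch of that parsing loop, not the SPDK setup/common.sh itself; the function name get_meminfo_sketch and its argument handling are assumptions for illustration only.)

#!/usr/bin/env bash
# Hedged sketch of the lookup the trace is exercising: read "Key: value" pairs,
# skip every field that is not the requested key, echo the value on a match.
get_meminfo_sketch() {
    local get=$1 node=${2:-}          # e.g. HugePages_Rsvd, optional NUMA node
    local mem_f=/proc/meminfo line var val _

    # Per-node lookups read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while read -r line; do
        # Per-node meminfo lines carry a "Node N " prefix; drop it before parsing.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated "continue" lines in the log
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0
}

# Example use matching the trace: get_meminfo_sketch HugePages_Rsvd   -> 0
#                                 get_meminfo_sketch HugePages_Surp 0 -> per-node value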
00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42792044 kB' 'MemAvailable: 44437648 kB' 'Buffers: 2716 kB' 'Cached: 11259692 kB' 'SwapCached: 20048 kB' 'Active: 6972816 kB' 'Inactive: 4909708 kB' 'Active(anon): 6518236 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603912 kB' 'Mapped: 195840 kB' 'Shmem: 9124788 kB' 'KReclaimable: 309868 kB' 'Slab: 924120 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614252 kB' 'KernelStack: 21952 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11101140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216396 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.486 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 
10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.487 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 
10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21827348 kB' 'MemUsed: 10811792 kB' 'SwapCached: 17412 kB' 'Active: 3983416 kB' 'Inactive: 4022624 kB' 'Active(anon): 3936944 kB' 'Inactive(anon): 3219712 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7585672 kB' 'Mapped: 112260 kB' 'AnonPages: 423516 kB' 'Shmem: 6718876 kB' 'KernelStack: 12248 kB' 
'PageTables: 4832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201192 kB' 'Slab: 527136 kB' 'SReclaimable: 201192 kB' 'SUnreclaim: 325944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.488 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:55.489 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20969588 kB' 'MemUsed: 6686492 kB' 'SwapCached: 2636 kB' 'Active: 2986784 kB' 'Inactive: 887084 kB' 'Active(anon): 2578676 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 880128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3696840 kB' 'Mapped: 83580 kB' 'AnonPages: 177240 kB' 'Shmem: 2405968 kB' 'KernelStack: 9688 kB' 
'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 396984 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 288308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.490 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.490
[... setup/common.sh@32 repeats the same IFS=': ' / read -r / continue comparison for every remaining node0 meminfo field from Active(file) through FilePmdMapped; none of them matches HugePages_Surp ...]
00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 10:54:52
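Aside on the trace notation: the backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p on the right-hand side of these [[ ... ]] records is simply how bash xtrace prints a quoted, literal comparison string, so the match is exact rather than a glob. A minimal sketch of the loop being traced here (the variable names are illustrative, not the exact setup/common.sh source):

    get=HugePages_Surp
    while IFS=': ' read -r var val _; do
        # quoted RHS => literal match; with `set -x` it is echoed as \H\u\g\e\P\a\g\e\s\_\S\u\r\p
        [[ $var == "$get" ]] && echo "$val" && break
    done < /proc/meminfo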
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:55.491 node0=512 expecting 512 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:55.491 node1=512 expecting 512 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:55.491 00:02:55.491 real 0m3.617s 00:02:55.491 user 0m1.362s 00:02:55.491 sys 0m2.318s 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:55.491 10:54:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:55.491 ************************************ 00:02:55.491 END TEST even_2G_alloc 00:02:55.491 ************************************ 00:02:55.491 10:54:52 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:55.491 10:54:52 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:55.491 10:54:52 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:55.491 10:54:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:55.491 ************************************ 00:02:55.491 START TEST odd_alloc 00:02:55.491 
************************************ 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.491 10:54:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:58.782 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.782 
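For reference, the odd_alloc sizing traced just above works out as follows: HUGEMEM=2049 corresponds to 2098176 kB, which at the default 2048 kB hugepage size is 1024.5 pages, rounded up to the odd count nr_hugepages=1025; across the two NUMA nodes the helper ends up requesting 513 pages on node0 and 512 on node1. A rough sketch of that arithmetic (an illustration of the traced values, not the actual get_test_nr_hugepages_per_node code):

    total_kb=2098176 hugepagesz_kb=2048 nodes=2
    pages=$(( (total_kb + hugepagesz_kb - 1) / hugepagesz_kb ))   # 1025, matching nr_hugepages above
    base=$(( pages / nodes ))                                     # 512
    rem=$(( pages % nodes ))                                      # 1 leftover page
    echo "node0=$(( base + rem )) node1=$base"                    # node0=513 node1=512, as traced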
0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.782 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:58.782 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:58.782 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:58.782 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42828116 kB' 'MemAvailable: 44473720 kB' 'Buffers: 2716 kB' 'Cached: 11259812 kB' 'SwapCached: 20048 kB' 'Active: 6974208 kB' 'Inactive: 4909708 kB' 'Active(anon): 6519628 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604696 kB' 'Mapped: 195972 kB' 'Shmem: 9124908 kB' 'KReclaimable: 309868 kB' 'Slab: 923964 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614096 kB' 'KernelStack: 21936 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11104508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216476 kB' 
'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.783 10:54:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.783
[... setup/common.sh@32 repeats the same IFS=': ' / read -r / continue comparison for every field from Active(anon) through Committed_AS; none of them matches AnonHugePages ...]
00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 
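Each get_meminfo call in this trace reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node argument is given; here node is empty, so the per-node path check fails and the system-wide file is used) and strips the "Node N " prefix that per-node files put in front of every line before scanning for the requested field. A self-contained sketch of that file selection and prefix strip, mirroring the mem_f/mapfile records above rather than reproducing setup/common.sh verbatim:

    #!/usr/bin/env bash
    shopt -s extglob                       # required for the +([0-9]) pattern below
    node=""                                # empty => system-wide /proc/meminfo, as in this run
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node lines look like "Node 0 MemTotal: ..."; drop the prefix
    printf '%s\n' "${mem[@]:0:3}"          # first few entries, e.g. MemTotal / MemFree / MemAvailable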
10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42833844 kB' 'MemAvailable: 44479448 kB' 'Buffers: 2716 kB' 'Cached: 11259816 kB' 'SwapCached: 20048 kB' 'Active: 6969236 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514656 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599584 kB' 'Mapped: 195776 kB' 'Shmem: 9124912 kB' 'KReclaimable: 309868 kB' 'Slab: 923976 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614108 kB' 'KernelStack: 22080 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11098160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:58.784 10:54:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.784
[... setup/common.sh@32 repeats the same IFS=': ' / read -r / continue comparison for every field from Active through HugePages_Free; none of them matches HugePages_Surp ...]
00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42834400 kB' 'MemAvailable: 44480004 kB' 'Buffers: 2716 kB' 'Cached: 11259832 kB' 'SwapCached: 20048 kB' 'Active: 6970732 kB' 'Inactive: 4909708 kB' 'Active(anon): 6516152 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600668 kB' 'Mapped: 195364 kB' 'Shmem: 9124928 kB' 'KReclaimable: 309868 kB' 'Slab: 923968 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614100 kB' 'KernelStack: 22032 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11109448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- 
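The snapshot printed just above is internally consistent on the hugepage side: HugePages_Total and HugePages_Free are both 1025 with HugePages_Rsvd and HugePages_Surp at 0, so no pages are in use yet, and 1025 pages at 2048 kB account exactly for the 'Hugetlb: 2099200 kB' line. A quick check of that arithmetic, using values copied from the snapshot:

    total=1025 free=1025 hugepagesz_kb=2048
    echo "in use: $(( total - free )) pages"            # 0
    echo "Hugetlb: $(( total * hugepagesz_kb )) kB"     # 2099200 kB, matching the meminfo output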
setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.786 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.787 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 
10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
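The field-by-field scan traced above (and continuing below) is the get_meminfo helper from setup/common.sh reading /proc/meminfo, or a single NUMA node's meminfo file, and printing the value for one requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total in this run). A minimal bash sketch of that logic, reconstructed from this trace rather than from the upstream source, with the paths and field handling taken from the log:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # Sketch of the traced helper: print the value of $1 from /proc/meminfo,
    # or from /sys/devices/system/node/node$2/meminfo when a node is given.
    get_meminfo() {
            local get=$1 node=$2
            local mem_f=/proc/meminfo
            [[ -e /sys/devices/system/node/node$node/meminfo ]] \
                    && mem_f=/sys/devices/system/node/node$node/meminfo
            local -a mem
            mapfile -t mem < "$mem_f"
            # Per-node files prefix every line with "Node N "; strip it as the trace does.
            mem=("${mem[@]#Node +([0-9]) }")
            local line var val _
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    [[ $var == "$get" ]] || continue
                    echo "$val"
                    return 0
            done
            return 1
    }

    # Usage mirroring the odd_alloc checks in this run (1025 pages requested,
    # later verified as 512/513 across the two NUMA nodes):
    # surp=$(get_meminfo HugePages_Surp)
    # resv=$(get_meminfo HugePages_Rsvd)
    # total=$(get_meminfo HugePages_Total)
    # (( total == 1025 + surp + resv )) && echo "hugepage accounting consistent"
    # node0_surp=$(get_meminfo HugePages_Surp 0)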
00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.050 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:59.051 nr_hugepages=1025 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:59.051 resv_hugepages=0 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:59.051 surplus_hugepages=0 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:59.051 anon_hugepages=0 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42844336 kB' 'MemAvailable: 44489940 kB' 'Buffers: 2716 kB' 'Cached: 11259856 kB' 'SwapCached: 20048 kB' 'Active: 6968964 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514384 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599256 kB' 'Mapped: 195364 kB' 'Shmem: 9124952 kB' 'KReclaimable: 309868 kB' 'Slab: 923680 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 613812 kB' 'KernelStack: 22128 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11097836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216680 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.051 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.052 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21837004 kB' 'MemUsed: 10802136 kB' 'SwapCached: 17412 kB' 'Active: 3982764 kB' 'Inactive: 4022624 kB' 'Active(anon): 3936292 kB' 'Inactive(anon): 3219712 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7585700 kB' 'Mapped: 112272 kB' 'AnonPages: 422824 kB' 'Shmem: 6718904 kB' 'KernelStack: 12536 kB' 'PageTables: 5508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201192 kB' 'Slab: 526596 kB' 'SReclaimable: 201192 kB' 'SUnreclaim: 325404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.053 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 21004600 kB' 'MemUsed: 6651480 kB' 'SwapCached: 2636 kB' 'Active: 2986764 kB' 'Inactive: 887084 kB' 'Active(anon): 2578656 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 880128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3696976 kB' 'Mapped: 83100 kB' 'AnonPages: 176976 kB' 'Shmem: 2406104 kB' 'KernelStack: 9656 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 397076 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 288400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.054 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:59.055 node0=512 expecting 513 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:59.055 node1=513 expecting 512 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:59.055 00:02:59.055 real 0m3.480s 00:02:59.055 user 0m1.278s 00:02:59.055 sys 0m2.237s 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:59.055 10:54:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:59.055 ************************************ 00:02:59.055 END TEST odd_alloc 00:02:59.055 ************************************ 00:02:59.055 10:54:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:59.055 10:54:56 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:59.055 10:54:56 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:59.055 10:54:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:59.055 ************************************ 00:02:59.055 START TEST custom_alloc 00:02:59.055 ************************************ 00:02:59.055 10:54:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:02:59.055 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
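Note: the bulk of the xtrace above comes from setup/common.sh's get_meminfo helper stepping through every field of /proc/meminfo (or the per-node meminfo file) until it reaches the one the hugepages test asked for; each non-matching key produces one "[[ ... ]]" / "continue" pair in the trace. Below is a condensed sketch of that helper, reconstructed only from the traced statements; it is not the verbatim script, and the loop construct plus the explicit extglob toggle are editorial assumptions.

# Condensed sketch of setup/common.sh's get_meminfo(), reconstructed from
# the traced statements above. Not the verbatim script: the loop construct
# and the explicit extglob toggle are editorial assumptions.
shopt -s extglob                       # needed for the +([0-9]) pattern below

get_meminfo() {
        local get=$1                   # field to report, e.g. HugePages_Surp
        local node=${2:-}              # optional NUMA node index
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer the per-node meminfo file when a node index is supplied
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk the fields until the requested key is found, then print its
        # value; every non-matching key appears in the trace as one
        # "[[ ... ]]" / "continue" pair.
        local line
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] || continue
                echo "$val"
                return 0
        done
        return 1
}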
00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.056 10:54:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:02.350 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:02.350 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:02.350 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:02.350 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:02.350 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:02.350 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:02.350 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:02.350 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:02.614 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:02.614 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:02.614 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:02.614 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:02.614 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:03:02.614 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:02.614 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:02.614 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:02.614 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41810864 kB' 'MemAvailable: 43456468 kB' 'Buffers: 2716 kB' 'Cached: 11259984 kB' 'SwapCached: 20048 kB' 'Active: 6969488 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514908 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599368 kB' 'Mapped: 195452 kB' 'Shmem: 9125080 kB' 'KReclaimable: 309868 kB' 'Slab: 924204 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614336 kB' 'KernelStack: 21968 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11096284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216424 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.614 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.615 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41813100 kB' 'MemAvailable: 43458704 kB' 'Buffers: 2716 kB' 'Cached: 11259988 kB' 'SwapCached: 20048 kB' 'Active: 6969224 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514644 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599556 kB' 'Mapped: 195352 kB' 'Shmem: 9125084 kB' 'KReclaimable: 309868 kB' 'Slab: 924212 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614344 kB' 'KernelStack: 21952 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11096304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 
10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.616 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.617 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41812344 kB' 'MemAvailable: 43457948 kB' 'Buffers: 2716 kB' 'Cached: 11259988 kB' 'SwapCached: 20048 kB' 'Active: 6969264 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514684 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599592 kB' 'Mapped: 195352 kB' 'Shmem: 
9125084 kB' 'KReclaimable: 309868 kB' 'Slab: 924212 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614344 kB' 'KernelStack: 21968 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11096324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216424 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 
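
The repeated triplets above ("[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]", "continue", "IFS=': '", "read -r var val _") are the xtrace of setup/common.sh's get_meminfo walking the meminfo snapshot field by field; under set -x the quoted right-hand side of the == test is printed with every character backslash-escaped, which is why the wanted field name appears that way. The full snapshot echoed just before each scan is the contents of the mem array the function loads. A minimal sketch of that lookup pattern, with names mirroring the trace (this is an illustration of the pattern, not the SPDK script itself):

#!/usr/bin/env bash
# Sketch of the get_meminfo field scan visible in the trace above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    # With a node argument, read the per-node copy instead (used later in this log).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines start with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated 'continue' seen in the trace
        echo "${val:-0}"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp     # prints 0 in this run
get_meminfo HugePages_Rsvd     # prints 0 in this run
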
10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.618 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:02.619 nr_hugepages=1536 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:02.619 resv_hugepages=0 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:02.619 surplus_hugepages=0 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:02.619 anon_hugepages=0 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.619 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41813048 kB' 'MemAvailable: 43458652 kB' 'Buffers: 2716 kB' 'Cached: 11260044 kB' 'SwapCached: 20048 kB' 'Active: 6968888 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514308 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599160 kB' 'Mapped: 195352 kB' 'Shmem: 9125140 kB' 'KReclaimable: 309868 kB' 'Slab: 924212 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614344 kB' 'KernelStack: 21936 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11096344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.620 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.882 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
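
At hugepages.sh@99-110 the trace pins down the accounting this custom_alloc pass relies on: HugePages_Surp and HugePages_Rsvd both read back as 0, nr_hugepages is 1536, and the script then re-reads HugePages_Total expecting that same 1536. A small self-contained sketch of that consistency check; the values in the comments are the ones printed in the snapshot above, and meminfo_field is a stand-in helper written for this note, not part of the SPDK scripts:

#!/usr/bin/env bash
# Read a single numeric field out of /proc/meminfo.
meminfo_field() { awk -v f="$1:" '$1 == f { print $2; exit }' /proc/meminfo; }

nr_hugepages=1536                         # what custom_alloc configured
surp=$(meminfo_field HugePages_Surp)      # 0 in this run
resv=$(meminfo_field HugePages_Rsvd)      # 0 in this run
total=$(meminfo_field HugePages_Total)    # 1536 in this run

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
fi
# 1536 pages x 2048 kB (Hugepagesize) = 3145728 kB, matching the Hugetlb line above.
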
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.883 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- 
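
The tail of the trace above is the get_nodes step (hugepages.sh@27-33): the NUMA nodes are globbed from /sys/devices/system/node and the page split this pass expects is recorded, 512 pages on node0 and 1024 on node1, two nodes in total. A sketch of that enumeration plus the per-node check it sets up; the 512/1024 split and the 2048 kB page size are taken from this run's output, and the comparison loop is illustrative rather than the SPDK code:

#!/usr/bin/env bash
shopt -s extglob nullglob

declare -A nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    # Mirror the split echoed in the log: 512 pages on node0, 1024 on node1.
    nodes_sys[$id]=$(( id == 0 ? 512 : 1024 ))
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

# Compare the expected split against what the kernel has actually allocated
# per node (2048 kB pages, per the Hugepagesize line in the snapshot).
for id in "${!nodes_sys[@]}"; do
    f=/sys/devices/system/node/node$id/hugepages/hugepages-2048kB/nr_hugepages
    printf 'node%s: expected %s, kernel reports %s\n' "$id" "${nodes_sys[$id]}" "$(cat "$f")"
done
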
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21850516 kB' 'MemUsed: 10788624 kB' 'SwapCached: 17412 kB' 'Active: 3982560 kB' 'Inactive: 4022624 kB' 'Active(anon): 3936088 kB' 'Inactive(anon): 3219712 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7585712 kB' 'Mapped: 112276 kB' 'AnonPages: 422772 kB' 'Shmem: 6718916 kB' 'KernelStack: 12312 kB' 'PageTables: 5064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201192 kB' 'Slab: 527268 kB' 'SReclaimable: 201192 kB' 'SUnreclaim: 326076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.884 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.884 10:54:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
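The long run of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pairs above is the traced get_meminfo helper scanning every field of the node-0 meminfo snapshot until it reaches the requested one, then echoing its value. A condensed sketch of that lookup pattern is shown below; it is illustrative only, not the verbatim setup/common.sh source, and get_field is a made-up name.

#!/usr/bin/env bash
# Sketch: print the value of a single meminfo-style field, e.g. HugePages_Surp.
get_field() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip fields until the key matches
        echo "$val"
        return 0
    done < "$file"
    return 1
}
# Example: get_field HugePages_Total /proc/meminfo   # -> 1536 earlier in this trace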
00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
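At this point node 0 is finished (its HugePages_Surp of 0 has been folded into nodes_test[0]) and the loop moves on to node 1. The per-node files under /sys/devices/system/node/nodeN/meminfo prefix every line with "Node N ", which the traced helper strips via the mapfile and "${mem[@]#Node +([0-9]) }" steps seen above. Below is a minimal sketch of the same idea, assuming the node id is already known; node_meminfo is a hypothetical name, not the helper used by the script.

#!/usr/bin/env bash
# Sketch: read one field from a per-NUMA-node meminfo file. Every line in these
# files carries a "Node <N> " prefix that has to be stripped before parsing.
node_meminfo() {
    local get=$1 node=$2 line var val _
    while IFS= read -r line; do
        line=${line#"Node ${node} "}              # drop the "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}
# Example, matching the lookup that follows: node_meminfo HugePages_Surp 1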
00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 19964872 kB' 'MemUsed: 7691208 kB' 'SwapCached: 2636 kB' 'Active: 2986700 kB' 'Inactive: 887084 kB' 'Active(anon): 2578592 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 880128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3697120 kB' 'Mapped: 83076 kB' 'AnonPages: 176792 kB' 'Shmem: 2406248 kB' 'KernelStack: 9640 kB' 'PageTables: 3424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 396944 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 288268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.885 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
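For context, the scan above is only retrieving node 1's HugePages_Surp; the surrounding hugepages.sh loop then folds the reserved and surplus counts into each node's expected share and compares the result with the 512/1024 split echoed further down. A compressed illustration of that accounting follows, with the values taken from this trace and all variable names illustrative rather than the script's own.

#!/usr/bin/env bash
# Accounting sketch; values come from this trace, names are illustrative.
nr_hugepages=1536 surp=0 resv=0            # requested total, HugePages_Surp, HugePages_Rsvd
hp_total=1536                              # HugePages_Total reported by /proc/meminfo
(( hp_total == nr_hugepages + surp + resv )) || echo "unexpected global hugepage count"
declare -A nodes_test=([0]=512 [1]=1024)   # expected per-node split
for node in "${!nodes_test[@]}"; do
    node_surp=0                            # HugePages_Surp read from node${node}/meminfo
    (( nodes_test[node] += resv + node_surp ))
    echo "node${node}=${nodes_test[node]}"
done
# -> node0=512, node1=1024, matching the "expecting" lines printed below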
00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.886 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:02.887 node0=512 expecting 512
00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:02.887 node1=1024 expecting 1024
00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:02.887
00:03:02.887 real 0m3.690s
00:03:02.887 user 0m1.340s
00:03:02.887 sys 0m2.411s
00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:02.887 10:54:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:02.887 ************************************
00:03:02.887 END TEST custom_alloc
00:03:02.887 ************************************
00:03:02.887 10:54:59 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:02.887 10:54:59 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:02.887 10:54:59 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:02.887 10:54:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:02.887 ************************************
00:03:02.887 START TEST no_shrink_alloc
00:03:02.887 ************************************
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
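The get_test_nr_hugepages 2097152 0 call above converts a size in kB into a page count: 2097152 kB divided by the 2048 kB Hugepagesize reported in the meminfo dumps in this log gives 1024 pages, and the explicit node argument pins all of them to node 0. A minimal sketch of that conversion follows, assuming the 2048 kB default; the names are illustrative, not the script's own.

#!/usr/bin/env bash
# Sketch of the size-to-page-count conversion used by the hugepages tests.
size=2097152                                   # kB requested by the test
default_hugepages=2048                         # kB per 2M huge page (assumed)
nr_hugepages=$(( size / default_hugepages ))   # -> 1024
node_ids=(0)                                   # an explicit node list pins the pages
declare -A nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done
echo "node0 -> ${nodes_test[0]} pages"         # node0 -> 1024 pages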
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:02.887 10:55:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:06.180 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:06.180 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42865584 kB' 'MemAvailable: 44511188 kB' 'Buffers: 2716 kB' 'Cached: 11260148 kB' 'SwapCached: 20048 kB' 'Active: 6969700 kB' 'Inactive: 4909708 kB' 'Active(anon): 6515120 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599344 kB' 'Mapped: 195492 kB' 'Shmem: 9125244 kB' 'KReclaimable: 309868 kB' 'Slab: 924340 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614472 kB' 'KernelStack: 21968 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11097280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.445 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.446 10:55:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
[... setup/common.sh@31-32: NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are each read and compared against AnonHugePages, then skipped with 'continue' ...]
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.447 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42865328 kB' 'MemAvailable: 44510932 kB' 'Buffers: 2716 kB' 'Cached: 11260148 kB' 'SwapCached: 20048 kB' 'Active: 6969268 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514688 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599276 kB' 'Mapped: 195364 kB' 'Shmem: 9125244 kB' 'KReclaimable: 309868 kB' 'Slab: 924336 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614468 kB' 'KernelStack: 21952 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11097296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216472 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB'
[... setup/common.sh@31-32: every field from MemTotal through HugePages_Rsvd is read, compared against HugePages_Surp, and skipped with 'continue' ...]
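The xtrace above is the per-key scan performed by the get_meminfo helper in setup/common.sh: it snapshots the meminfo source once (the printf of the mapfile'd array), then splits each line on ': ' and echoes the value only when the key matches the requested field. A minimal stand-alone sketch of that loop follows (paraphrased from the trace, not the SPDK source itself; the per-node branch is an assumption based on the '-e /sys/devices/system/node/...' check):

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # When a NUMA node id is given and per-node meminfo exists, read that file instead.
        # (The real helper also strips the leading "Node <id> " prefix from per-node lines,
        # as the mem=("${mem[@]#Node +([0-9]) }") step in the trace shows; omitted here.)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested key is found, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # -> 0 in this run
    surp=$(get_meminfo HugePages_Surp)  # -> 0 in this run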
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.449 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42869992 kB' 'MemAvailable: 44515596 kB' 'Buffers: 2716 kB' 'Cached: 11260176 kB' 'SwapCached: 20048 kB' 'Active: 6969388 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514808 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599428 kB' 'Mapped: 195364 kB' 'Shmem: 9125272 kB' 'KReclaimable: 309868 kB' 'Slab: 924336 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614468 kB' 'KernelStack: 21920 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11098444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB'
[... setup/common.sh@31-32: every field from MemTotal through HugePages_Free is read, compared against HugePages_Rsvd, and skipped with 'continue' ...]
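Both snapshots report the same hugepage state: 1024 pages of 2048 kB each, all free, none reserved or surplus. As a quick sanity check of those figures (plain shell arithmetic using the values printed above, not part of the test itself):

    hugepages_total=1024
    hugepagesize_kb=2048
    hugetlb_kb=$(( hugepages_total * hugepagesize_kb ))
    echo "$hugetlb_kb kB"   # 2097152 kB, matching the 'Hugetlb:' field in the snapshots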
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:06.451 nr_hugepages=1024
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:06.451 resv_hugepages=0
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:06.451 surplus_hugepages=0
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:06.451 anon_hugepages=0
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.451 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42870208 kB' 'MemAvailable: 44515812 kB' 'Buffers: 2716 kB' 'Cached: 11260176 kB' 'SwapCached: 20048 kB' 'Active: 6969168 kB' 'Inactive: 4909708 kB' 'Active(anon): 6514588 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599268 kB' 'Mapped: 195388 kB' 'Shmem: 9125272 kB' 'KReclaimable: 309868 kB' 'Slab: 924336 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 614468 kB' 'KernelStack: 21952 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11098700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB'
[... setup/common.sh@31-32: the per-key scan comparing each field against HugePages_Total continues past the end of this excerpt ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.452 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20820764 kB' 'MemUsed: 11818376 kB' 'SwapCached: 17412 kB' 'Active: 3982984 kB' 'Inactive: 4022624 kB' 'Active(anon): 3936512 kB' 'Inactive(anon): 3219712 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7585756 kB' 'Mapped: 112288 kB' 'AnonPages: 422968 kB' 'Shmem: 6718960 kB' 'KernelStack: 12280 kB' 'PageTables: 4932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201192 kB' 'Slab: 527400 kB' 'SReclaimable: 201192 kB' 'SUnreclaim: 326208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:06.454 node0=1024 expecting 1024 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.454 10:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:09.741 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.741 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.741 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42865556 kB' 'MemAvailable: 44511160 kB' 'Buffers: 2716 kB' 'Cached: 11260288 kB' 'SwapCached: 20048 kB' 'Active: 6970308 kB' 'Inactive: 4909708 kB' 'Active(anon): 6515728 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599672 kB' 'Mapped: 195480 kB' 'Shmem: 9125384 kB' 'KReclaimable: 309868 kB' 'Slab: 923652 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 613784 kB' 'KernelStack: 22016 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11098080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.009 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.009 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
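The long run of `[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]` / `continue` pairs here is the xtrace of setup/common.sh's get_meminfo scanning a meminfo snapshot field by field until it reaches the requested key (AnonHugePages at this point). The mechanics visible in the trace are: pick /proc/meminfo or a per-node file under /sys/devices/system/node when a node is requested, strip the "Node <n> " prefix that the per-node files carry, then split each line with `IFS=': '` and return the value once the key matches. A minimal sketch of that lookup pattern follows; the function name get_meminfo_value and its argument order are illustrative, not the exact setup/common.sh implementation.

get_meminfo_value() {   # usage: get_meminfo_value <MeminfoKey> [numa-node]
    local key=$1 node=${2:-} file=/proc/meminfo line var val _
    # when a node is given, read the per-node view instead of the global one
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        # per-node files prefix every entry with "Node <n> "; strip that first
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"   # value only, e.g. 1024 for HugePages_Total
            return 0
        fi
    done < "$file"
    return 1
}

For the state captured in this run, `get_meminfo_value HugePages_Total` would print 1024 and `get_meminfo_value HugePages_Surp 0` would print 0, matching the `echo 1024` and `echo 0` lines seen earlier in the trace.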
00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.010 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
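The surrounding hugepages.sh trace (the `(( 1024 == nr_hugepages + surp + resv ))` test, the `nodes_test[node] += ...` accounting, and the `node0=1024 expecting 1024` echo) is verify_nr_hugepages cross-checking the global HugePages_Total against the requested count and then confirming each NUMA node's allocation. A condensed sketch of that per-node cross-check, reusing the hypothetical get_meminfo_value helper sketched above; the expected counts are simply this run's values (all 1024 pages on node0, none on node1), not anything the real harness hard-codes.

# per-node expectation for this run; a real harness would derive these from its config
want=( [0]=1024 [1]=0 )   # indexed by NUMA node number
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    got=$(get_meminfo_value HugePages_Total "$node")
    surp=$(get_meminfo_value HugePages_Surp "$node")
    echo "node$node=$((got + surp)) expecting ${want[$node]}"
    (( got + surp == want[node] )) || echo "node$node: hugepage count mismatch"
done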
00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42865868 kB' 'MemAvailable: 44511472 kB' 'Buffers: 2716 kB' 'Cached: 11260292 kB' 'SwapCached: 20048 kB' 'Active: 6969924 kB' 'Inactive: 4909708 kB' 'Active(anon): 6515344 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599816 kB' 'Mapped: 195372 kB' 'Shmem: 9125388 kB' 'KReclaimable: 309868 kB' 'Slab: 923620 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 613752 kB' 'KernelStack: 21952 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11098096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 
10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.011 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.012 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc 
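(Editor's note, not part of the captured log.) The trace above is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one requested (here HugePages_Surp); the per-character backslashes such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are simply how xtrace prints the quoted, literally-matched right-hand side of the comparison. Below is a minimal re-sketch of that lookup, assuming a simplified single-node case; the helper name get_meminfo_sketch and the exact wiring are illustrative, not the SPDK implementation.

#!/usr/bin/env bash
# Sketch of the lookup traced above: read /proc/meminfo (or a per-node copy
# under /sys), strip any "Node N" prefix, and scan line by line until the
# requested field is found. Simplified from setup/common.sh's get_meminfo.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    local -a mem

    # Per-node statistics live under /sys when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "

    while IFS=': ' read -r var val _; do
        # xtrace renders this with the RHS backslash-escaped because the
        # quoted string is matched literally, not as a glob.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

get_meminfo_sketch HugePages_Surp    # e.g. prints 0 for the node traced above

The same scan then repeats below for HugePages_Rsvd and HugePages_Total.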
-- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42866120 kB' 'MemAvailable: 44511724 kB' 'Buffers: 2716 kB' 'Cached: 11260312 kB' 'SwapCached: 20048 kB' 'Active: 6969948 kB' 'Inactive: 4909708 kB' 'Active(anon): 6515368 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599820 kB' 'Mapped: 195372 kB' 'Shmem: 9125408 kB' 'KReclaimable: 309868 kB' 'Slab: 923620 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 613752 kB' 'KernelStack: 21952 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11098120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.013 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 
10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.014 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.015 nr_hugepages=1024 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.015 resv_hugepages=0 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.015 surplus_hugepages=0 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.015 anon_hugepages=0 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- 
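(Editor's note, not part of the captured log.) At this point in the trace hugepages.sh has collected anon, surp and resv, echoed the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary, and asserted that the hugepage pool is consistent before fetching HugePages_Total again. A hedged sketch of that accounting check, reusing the get_meminfo_sketch helper above; the variable wiring and exit-on-failure behavior are assumptions, not a copy of hugepages.sh.

nr_hugepages=1024
anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB in the dump above
surp=$(get_meminfo_sketch HugePages_Surp)    # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
total=$(get_meminfo_sketch HugePages_Total)  # 1024

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Both assertions from the trace: the reported total must account for every
# configured page plus surplus and reserved pages, and with surp=resv=0 it
# must equal exactly what was requested.
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1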
setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42866120 kB' 'MemAvailable: 44511724 kB' 'Buffers: 2716 kB' 'Cached: 11260332 kB' 'SwapCached: 20048 kB' 'Active: 6969956 kB' 'Inactive: 4909708 kB' 'Active(anon): 6515376 kB' 'Inactive(anon): 3226668 kB' 'Active(file): 454580 kB' 'Inactive(file): 1683040 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599812 kB' 'Mapped: 195372 kB' 'Shmem: 9125428 kB' 'KReclaimable: 309868 kB' 'Slab: 923620 kB' 'SReclaimable: 309868 kB' 'SUnreclaim: 613752 kB' 'KernelStack: 21952 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11098140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.015 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.016 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
32639140 kB' 'MemFree: 20833912 kB' 'MemUsed: 11805228 kB' 'SwapCached: 17412 kB' 'Active: 3983844 kB' 'Inactive: 4022624 kB' 'Active(anon): 3937372 kB' 'Inactive(anon): 3219712 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7585792 kB' 'Mapped: 112296 kB' 'AnonPages: 423868 kB' 'Shmem: 6718996 kB' 'KernelStack: 12296 kB' 'PageTables: 4964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201192 kB' 'Slab: 526848 kB' 'SReclaimable: 201192 kB' 'SUnreclaim: 325656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.017 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:10.018 node0=1024 expecting 1024 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:10.018 00:03:10.018 real 0m7.181s 00:03:10.018 user 0m2.670s 00:03:10.018 sys 0m4.603s 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:10.018 10:55:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:10.018 ************************************ 00:03:10.018 END TEST no_shrink_alloc 00:03:10.018 ************************************ 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.345 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.346 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.346 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.346 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.346 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:10.346 10:55:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:10.346 00:03:10.346 real 0m27.279s 00:03:10.346 user 0m9.520s 00:03:10.346 sys 0m16.640s 00:03:10.346 10:55:07 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:10.346 10:55:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:10.346 ************************************ 00:03:10.346 END TEST hugepages 00:03:10.346 ************************************ 00:03:10.346 10:55:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:10.346 10:55:07 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:10.346 10:55:07 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:10.346 10:55:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:10.346 ************************************ 00:03:10.346 START TEST driver 00:03:10.346 ************************************ 00:03:10.346 10:55:07 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:10.346 * Looking for test storage... 
00:03:10.346 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:10.346 10:55:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:10.346 10:55:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.346 10:55:07 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.643 10:55:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:15.643 10:55:12 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:15.643 10:55:12 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:15.643 10:55:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:15.643 ************************************ 00:03:15.643 START TEST guess_driver 00:03:15.643 ************************************ 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:15.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:15.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:15.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:15.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:15.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:15.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:15.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:15.643 10:55:12 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:15.643 Looking for driver=vfio-pci 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.643 10:55:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.180 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.439 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.440 10:55:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.356 10:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.356 10:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.356 10:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.356 10:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:20.356 10:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:20.356 10:55:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.356 10:55:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.627 00:03:25.627 real 0m9.806s 00:03:25.627 user 0m2.571s 00:03:25.627 sys 0m4.944s 00:03:25.627 10:55:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:25.627 10:55:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:25.627 ************************************ 00:03:25.627 END TEST guess_driver 00:03:25.627 ************************************ 00:03:25.627 00:03:25.627 real 0m14.552s 00:03:25.627 user 0m3.843s 00:03:25.627 sys 0m7.631s 00:03:25.627 10:55:21 setup.sh.driver -- common/autotest_common.sh@1123 
-- # xtrace_disable 00:03:25.627 10:55:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:25.627 ************************************ 00:03:25.627 END TEST driver 00:03:25.627 ************************************ 00:03:25.627 10:55:21 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:25.627 10:55:21 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:25.627 10:55:21 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:25.627 10:55:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.627 ************************************ 00:03:25.627 START TEST devices 00:03:25.627 ************************************ 00:03:25.627 10:55:22 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:25.627 * Looking for test storage... 00:03:25.627 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:25.627 10:55:22 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:25.627 10:55:22 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:25.627 10:55:22 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.627 10:55:22 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:28.917 10:55:25 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:28.917 10:55:25 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:28.917 10:55:25 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:28.917 10:55:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:28.917 10:55:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:28.917 10:55:25 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:28.917 10:55:25 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.917 10:55:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:28.917 10:55:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:28.917 10:55:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:28.917 10:55:25 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py 
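[editor's note] The guess_driver trace above settles on vfio-pci because the host exposes IOMMU groups (176 of them) and modprobe can resolve vfio_pci to a chain of .ko modules; otherwise it would report 'No valid driver found'. Below is a condensed sketch of that decision reconstructed from the trace; the helper name and the unsafe no-IOMMU handling are assumptions, not the SPDK script itself.

#!/usr/bin/env bash
# Sketch of the driver pick traced above (setup/driver.sh); simplified.
shopt -s nullglob

pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    local -a iommu_groups=(/sys/kernel/iommu_groups/*)

    # The run above found 176 IOMMU groups, so vfio is viable; the module is
    # accepted only if modprobe can resolve vfio_pci to real .ko objects.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver)
echo "Looking for driver=$driver"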
nvme0n1 00:03:28.918 No valid GPT data, bailing 00:03:28.918 10:55:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.918 10:55:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:28.918 10:55:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:28.918 10:55:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:28.918 10:55:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:28.918 10:55:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:28.918 10:55:25 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:28.918 10:55:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:28.918 10:55:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.918 10:55:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:28.918 10:55:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:28.918 10:55:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:28.918 10:55:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:28.918 10:55:25 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:28.918 10:55:25 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:28.918 10:55:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:28.918 ************************************ 00:03:28.918 START TEST nvme_mount 00:03:28.918 ************************************ 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 
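[editor's note] In the devices test starting above, a namespace qualifies as the test disk only if it is not zoned, spdk-gpt.py/blkid find no partition table on it ("No valid GPT data, bailing" and an empty PTTYPE), and its size clears the 3221225472-byte minimum; nvme0n1 passes at 1600321314816 bytes and is mapped to PCI 0000:d8:00.0. A rough stand-alone version of that filter follows; the helper names are illustrative, and the real script additionally excludes multipath "c" devices and records each disk's PCI address.

#!/usr/bin/env bash
# Sketch of the disk filter traced above (setup/devices.sh); names illustrative.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

is_block_zoned() {
    # A device counts as zoned when queue/zoned exists and reads anything
    # other than "none" (the trace read "none" for nvme0n1).
    local dev=$1
    [[ -e /sys/block/$dev/queue/zoned ]] &&
        [[ $(< "/sys/block/$dev/queue/zoned") != none ]]
}

size_in_bytes() {
    # /sys/block/<dev>/size is in 512-byte sectors; the run above reported
    # 1600321314816 bytes for nvme0n1.
    echo $(( $(< "/sys/block/$1/size") * 512 ))
}

eligible_disks() {
    local block dev
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=${block##*/}
        is_block_zoned "$dev" && continue
        # An empty blkid PTTYPE means the namespace carries no partition
        # table and is free for the test to repartition.
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
        (( $(size_in_bytes "$dev") >= min_disk_size )) && echo "$dev"
    done
}

eligible_disks    # printed nvme0n1 (PCI 0000:d8:00.0) in this run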
00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:28.918 10:55:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:29.856 Creating new GPT entries in memory. 00:03:29.856 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:29.856 other utilities. 00:03:29.856 10:55:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:29.856 10:55:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:29.856 10:55:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:29.856 10:55:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:29.856 10:55:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:30.794 Creating new GPT entries in memory. 00:03:30.794 The operation has completed successfully. 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1361870 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.794 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:30.795 10:55:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:30.795 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.795 10:55:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.087 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 
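[editor's note] The nvme_mount trace above wipes the GPT on /dev/nvme0n1, creates a single 1 GiB partition (sectors 2048-2099199) under flock, waits for the partition uevent, then formats it with mkfs.ext4 -qF and mounts it at the test mount point before walking the PCI allow list. The sketch below replays those steps in isolation; it substitutes udevadm settle for the sync_dev_uevents.sh wait used by the test and destroys all data on the named disk, so treat it as an illustration of the flow rather than the test itself.

#!/usr/bin/env bash
# Sketch replaying the partition/format/mount flow traced above. DESTROYS all
# data on $disk; device and mount point are the ones from this specific run.
set -euo pipefail

disk=/dev/nvme0n1
part=${disk}p1
mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount

# Wipe existing GPT/MBR metadata, then create one 1 GiB partition
# (sectors 2048-2099199, exactly as in the trace), holding flock on the
# whole-disk node while sgdisk rewrites the table.
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:2099199

# Wait for the partition node to appear, then format and mount it.
udevadm settle
[[ -e $part ]]
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"
mount "$part" "$mnt"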
10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:34.088 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:34.088 10:55:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:34.088 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:34.088 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:34.088 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:34.088 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.088 10:55:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.380 10:55:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:40.674 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:40.674 00:03:40.674 real 0m11.846s 00:03:40.674 user 0m3.318s 00:03:40.674 sys 0m6.375s 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:40.674 10:55:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:40.674 
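The nvme_mount test traced above follows a simple cycle: format the namespace, mount it under the repository's test/setup/nvme_mount directory, drop a test_nvme marker file, re-run setup.sh config to confirm the active mount keeps 0000:d8:00.0 from being rebound, then unmount and wipe all signatures. A condensed, destructive sketch of that cycle, assuming /dev/nvme0n1 is a disposable test disk exactly as in this run:

  dev=/dev/nvme0n1
  mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
  mkdir -p "$mnt"
  mkfs.ext4 -qF "$dev" 1024M     # size-capped filesystem, as the mkfs() helper in setup/common.sh does
  mount "$dev" "$mnt"
  touch "$mnt/test_nvme"         # marker file the verify step checks for
  umount "$mnt"                  # cleanup_nvme: unmount, then erase signatures
  wipefs --all "$dev"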
************************************ 00:03:40.674 END TEST nvme_mount 00:03:40.674 ************************************ 00:03:40.674 10:55:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:40.674 10:55:37 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:40.674 10:55:37 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:40.674 10:55:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.674 ************************************ 00:03:40.674 START TEST dm_mount 00:03:40.674 ************************************ 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:40.674 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.675 10:55:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:41.612 Creating new GPT entries in memory. 00:03:41.612 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.612 other utilities. 00:03:41.612 10:55:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.612 10:55:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.612 10:55:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:41.612 10:55:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.612 10:55:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:42.549 Creating new GPT entries in memory. 00:03:42.549 The operation has completed successfully. 00:03:42.549 10:55:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:42.549 10:55:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.549 10:55:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:42.549 10:55:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:42.549 10:55:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:43.928 The operation has completed successfully. 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1366117 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.928 10:55:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:47.219 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.219 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:47.220 10:55:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:47.220 
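The dm_mount test builds a device-mapper target (nvme_dm_test) over the two freshly created partitions and confirms that relationship through sysfs holder links, as seen in the "holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0" active-device strings above. A small standalone sketch of the same holder check, using the names from this run (the dm-N minor number can differ on other machines):

  dm_node=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0
  dm_name=${dm_node##*/}                             # dm-0
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm_name ]] && echo "nvme0n1p1 is held by $dm_name"
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm_name ]] && echo "nvme0n1p2 is held by $dm_name"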
10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.220 10:55:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.757 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:49.758 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:49.758 00:03:49.758 real 0m9.145s 00:03:49.758 user 0m2.031s 00:03:49.758 sys 0m4.108s 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:49.758 10:55:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:49.758 ************************************ 00:03:49.758 END TEST dm_mount 00:03:49.758 ************************************ 00:03:49.758 10:55:46 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:03:49.758 10:55:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:49.758 10:55:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.758 10:55:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.758 10:55:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:49.758 10:55:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.758 10:55:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.017 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:50.017 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:50.017 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.017 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.017 10:55:47 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:50.017 10:55:47 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:50.017 10:55:47 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:50.017 10:55:47 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.017 10:55:47 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.017 10:55:47 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.017 10:55:47 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:50.017 00:03:50.017 real 0m25.182s 00:03:50.017 user 0m6.754s 00:03:50.017 sys 0m13.168s 00:03:50.018 10:55:47 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:50.018 10:55:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 ************************************ 00:03:50.018 END TEST devices 00:03:50.018 ************************************ 00:03:50.018 00:03:50.018 real 1m31.300s 00:03:50.018 user 0m27.805s 00:03:50.018 sys 0m52.176s 00:03:50.018 10:55:47 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:50.018 10:55:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 ************************************ 00:03:50.018 END TEST setup.sh 00:03:50.018 ************************************ 00:03:50.018 10:55:47 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:53.384 Hugepages 00:03:53.384 node hugesize free / total 00:03:53.384 node0 1048576kB 0 / 0 00:03:53.384 node0 2048kB 2048 / 2048 00:03:53.384 node1 1048576kB 0 / 0 00:03:53.384 node1 2048kB 0 / 0 00:03:53.384 00:03:53.384 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.384 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:53.384 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:53.384 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:53.384 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:53.384 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:53.384 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:53.384 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:53.384 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:53.384 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:53.384 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:53.384 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:53.384 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:53.384 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:53.384 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:53.384 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:53.384 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:53.384 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:53.384 10:55:50 -- spdk/autotest.sh@130 -- # uname -s 00:03:53.384 10:55:50 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:53.384 10:55:50 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:53.384 10:55:50 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:56.672 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.672 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.932 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.932 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.932 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.932 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:58.312 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.571 10:55:55 -- common/autotest_common.sh@1529 -- # sleep 1 00:03:59.507 10:55:56 -- common/autotest_common.sh@1530 -- # bdfs=() 00:03:59.507 10:55:56 -- common/autotest_common.sh@1530 -- # local bdfs 00:03:59.507 10:55:56 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:03:59.507 10:55:56 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:03:59.507 10:55:56 -- common/autotest_common.sh@1510 -- # bdfs=() 00:03:59.507 10:55:56 -- common/autotest_common.sh@1510 -- # local bdfs 00:03:59.507 10:55:56 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.507 10:55:56 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:59.507 10:55:56 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:03:59.767 10:55:56 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:03:59.767 10:55:56 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:03:59.767 10:55:56 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.053 Waiting for block devices as requested 00:04:03.053 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:03.053 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:03.053 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:03.053 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:03.053 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:03.053 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:03.053 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:03.053 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:03.313 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:03.313 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:03.313 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:03.571 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:03.571 
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:03.571 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:03.830 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:03.830 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:03.830 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:04.088 10:56:01 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:04:04.088 10:56:01 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:04.088 10:56:01 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:04:04.089 10:56:01 -- common/autotest_common.sh@1499 -- # grep 0000:d8:00.0/nvme/nvme 00:04:04.089 10:56:01 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:04.089 10:56:01 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:04.089 10:56:01 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:04.089 10:56:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:04:04.089 10:56:01 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:04:04.089 10:56:01 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:04:04.089 10:56:01 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:04:04.089 10:56:01 -- common/autotest_common.sh@1542 -- # grep oacs 00:04:04.089 10:56:01 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:04:04.089 10:56:01 -- common/autotest_common.sh@1542 -- # oacs=' 0xe' 00:04:04.089 10:56:01 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:04:04.089 10:56:01 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:04:04.089 10:56:01 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:04:04.089 10:56:01 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:04:04.089 10:56:01 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:04:04.089 10:56:01 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:04:04.089 10:56:01 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:04:04.089 10:56:01 -- common/autotest_common.sh@1554 -- # continue 00:04:04.089 10:56:01 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:04.089 10:56:01 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:04.089 10:56:01 -- common/autotest_common.sh@10 -- # set +x 00:04:04.089 10:56:01 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:04.089 10:56:01 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:04.089 10:56:01 -- common/autotest_common.sh@10 -- # set +x 00:04:04.089 10:56:01 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:07.374 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:04:07.374 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:07.374 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:08.752 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:08.752 10:56:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:08.752 10:56:05 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:08.752 10:56:05 -- common/autotest_common.sh@10 -- # set +x 00:04:08.752 10:56:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:08.752 10:56:05 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:04:08.752 10:56:05 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:04:08.752 10:56:05 -- common/autotest_common.sh@1574 -- # bdfs=() 00:04:08.752 10:56:05 -- common/autotest_common.sh@1574 -- # local bdfs 00:04:08.752 10:56:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:04:08.752 10:56:05 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:08.752 10:56:05 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:08.752 10:56:05 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.752 10:56:05 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:08.752 10:56:05 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:09.011 10:56:06 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:09.011 10:56:06 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:04:09.011 10:56:06 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:04:09.011 10:56:06 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:09.011 10:56:06 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:04:09.011 10:56:06 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:09.011 10:56:06 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:04:09.011 10:56:06 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:d8:00.0 00:04:09.011 10:56:06 -- common/autotest_common.sh@1589 -- # [[ -z 0000:d8:00.0 ]] 00:04:09.011 10:56:06 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=1375553 00:04:09.011 10:56:06 -- common/autotest_common.sh@1595 -- # waitforlisten 1375553 00:04:09.011 10:56:06 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.011 10:56:06 -- common/autotest_common.sh@828 -- # '[' -z 1375553 ']' 00:04:09.011 10:56:06 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.011 10:56:06 -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:09.011 10:56:06 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.011 10:56:06 -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:09.011 10:56:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.011 [2024-05-15 10:56:06.071169] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
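opal_revert_cleanup first builds the list of NVMe controller BDFs from gen_nvme.sh output and keeps only those whose PCI device ID matches 0x0a54, then starts spdk_tgt and issues the revert over JSON-RPC (see the calls that follow). A sketch of just the discovery step, assuming the repository path used in this job:

  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  for bdf in "${bdfs[@]}"; do
      device_id=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device_id == 0x0a54 ]] && echo "revert candidate: $bdf"
  done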
00:04:09.011 [2024-05-15 10:56:06.071233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375553 ] 00:04:09.011 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.011 [2024-05-15 10:56:06.139904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.011 [2024-05-15 10:56:06.217972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.948 10:56:06 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:09.948 10:56:06 -- common/autotest_common.sh@861 -- # return 0 00:04:09.948 10:56:06 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:04:09.948 10:56:06 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:04:09.948 10:56:06 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:13.235 nvme0n1 00:04:13.235 10:56:09 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:13.235 [2024-05-15 10:56:10.046590] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:13.235 request: 00:04:13.235 { 00:04:13.235 "nvme_ctrlr_name": "nvme0", 00:04:13.235 "password": "test", 00:04:13.235 "method": "bdev_nvme_opal_revert", 00:04:13.235 "req_id": 1 00:04:13.235 } 00:04:13.235 Got JSON-RPC error response 00:04:13.235 response: 00:04:13.235 { 00:04:13.235 "code": -32602, 00:04:13.235 "message": "Invalid parameters" 00:04:13.235 } 00:04:13.235 10:56:10 -- common/autotest_common.sh@1601 -- # true 00:04:13.235 10:56:10 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:04:13.235 10:56:10 -- common/autotest_common.sh@1605 -- # killprocess 1375553 00:04:13.235 10:56:10 -- common/autotest_common.sh@947 -- # '[' -z 1375553 ']' 00:04:13.235 10:56:10 -- common/autotest_common.sh@951 -- # kill -0 1375553 00:04:13.235 10:56:10 -- common/autotest_common.sh@952 -- # uname 00:04:13.235 10:56:10 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:13.235 10:56:10 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1375553 00:04:13.235 10:56:10 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:13.235 10:56:10 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:13.235 10:56:10 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1375553' 00:04:13.235 killing process with pid 1375553 00:04:13.235 10:56:10 -- common/autotest_common.sh@966 -- # kill 1375553 00:04:13.235 10:56:10 -- common/autotest_common.sh@971 -- # wait 1375553 00:04:15.135 10:56:12 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:15.135 10:56:12 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:15.135 10:56:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:15.135 10:56:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:15.135 10:56:12 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:15.135 10:56:12 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:15.135 10:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:15.135 10:56:12 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:15.135 10:56:12 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:15.135 10:56:12 -- common/autotest_common.sh@1104 -- # xtrace_disable 
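The revert itself is two rpc.py calls against the freshly started spdk_tgt: attach the controller as bdev "nvme0", then request an Opal revert with password "test". On this drive the controller reports no Opal support, so the second call returns the -32602 "Invalid parameters" JSON-RPC error shown above, which the test tolerates. The same sequence issued by hand, assuming spdk_tgt is already listening on /var/tmp/spdk.sock:

  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
  "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test \
      || echo "revert rejected: controller does not support Opal"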
00:04:15.135 10:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:15.135 ************************************ 00:04:15.135 START TEST env 00:04:15.135 ************************************ 00:04:15.135 10:56:12 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:15.393 * Looking for test storage... 00:04:15.393 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:15.393 10:56:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.393 10:56:12 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:15.393 10:56:12 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:15.393 10:56:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.393 ************************************ 00:04:15.393 START TEST env_memory 00:04:15.393 ************************************ 00:04:15.393 10:56:12 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.393 00:04:15.393 00:04:15.393 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.393 http://cunit.sourceforge.net/ 00:04:15.393 00:04:15.393 00:04:15.393 Suite: memory 00:04:15.393 Test: alloc and free memory map ...[2024-05-15 10:56:12.556881] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.393 passed 00:04:15.393 Test: mem map translation ...[2024-05-15 10:56:12.570615] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.393 [2024-05-15 10:56:12.570631] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.393 [2024-05-15 10:56:12.570662] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.393 [2024-05-15 10:56:12.570670] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.393 passed 00:04:15.393 Test: mem map registration ...[2024-05-15 10:56:12.592062] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:15.393 [2024-05-15 10:56:12.592078] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:15.393 passed 00:04:15.393 Test: mem map adjacent registrations ...passed 00:04:15.393 00:04:15.393 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.393 suites 1 1 n/a 0 0 00:04:15.393 tests 4 4 4 0 0 00:04:15.393 asserts 152 152 152 0 n/a 00:04:15.393 00:04:15.393 Elapsed time = 0.087 seconds 00:04:15.393 00:04:15.393 real 0m0.099s 00:04:15.393 user 0m0.087s 00:04:15.393 sys 0m0.012s 00:04:15.393 10:56:12 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:15.393 10:56:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.393 ************************************ 
00:04:15.393 END TEST env_memory 00:04:15.393 ************************************ 00:04:15.393 10:56:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.393 10:56:12 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:15.393 10:56:12 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:15.652 10:56:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.652 ************************************ 00:04:15.652 START TEST env_vtophys 00:04:15.652 ************************************ 00:04:15.652 10:56:12 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.652 EAL: lib.eal log level changed from notice to debug 00:04:15.652 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.652 EAL: Detected lcore 1 as core 1 on socket 0 00:04:15.652 EAL: Detected lcore 2 as core 2 on socket 0 00:04:15.652 EAL: Detected lcore 3 as core 3 on socket 0 00:04:15.652 EAL: Detected lcore 4 as core 4 on socket 0 00:04:15.652 EAL: Detected lcore 5 as core 5 on socket 0 00:04:15.652 EAL: Detected lcore 6 as core 6 on socket 0 00:04:15.652 EAL: Detected lcore 7 as core 8 on socket 0 00:04:15.652 EAL: Detected lcore 8 as core 9 on socket 0 00:04:15.652 EAL: Detected lcore 9 as core 10 on socket 0 00:04:15.652 EAL: Detected lcore 10 as core 11 on socket 0 00:04:15.652 EAL: Detected lcore 11 as core 12 on socket 0 00:04:15.652 EAL: Detected lcore 12 as core 13 on socket 0 00:04:15.652 EAL: Detected lcore 13 as core 14 on socket 0 00:04:15.652 EAL: Detected lcore 14 as core 16 on socket 0 00:04:15.652 EAL: Detected lcore 15 as core 17 on socket 0 00:04:15.652 EAL: Detected lcore 16 as core 18 on socket 0 00:04:15.652 EAL: Detected lcore 17 as core 19 on socket 0 00:04:15.652 EAL: Detected lcore 18 as core 20 on socket 0 00:04:15.652 EAL: Detected lcore 19 as core 21 on socket 0 00:04:15.652 EAL: Detected lcore 20 as core 22 on socket 0 00:04:15.652 EAL: Detected lcore 21 as core 24 on socket 0 00:04:15.652 EAL: Detected lcore 22 as core 25 on socket 0 00:04:15.652 EAL: Detected lcore 23 as core 26 on socket 0 00:04:15.652 EAL: Detected lcore 24 as core 27 on socket 0 00:04:15.652 EAL: Detected lcore 25 as core 28 on socket 0 00:04:15.652 EAL: Detected lcore 26 as core 29 on socket 0 00:04:15.652 EAL: Detected lcore 27 as core 30 on socket 0 00:04:15.652 EAL: Detected lcore 28 as core 0 on socket 1 00:04:15.652 EAL: Detected lcore 29 as core 1 on socket 1 00:04:15.652 EAL: Detected lcore 30 as core 2 on socket 1 00:04:15.652 EAL: Detected lcore 31 as core 3 on socket 1 00:04:15.652 EAL: Detected lcore 32 as core 4 on socket 1 00:04:15.652 EAL: Detected lcore 33 as core 5 on socket 1 00:04:15.652 EAL: Detected lcore 34 as core 6 on socket 1 00:04:15.652 EAL: Detected lcore 35 as core 8 on socket 1 00:04:15.652 EAL: Detected lcore 36 as core 9 on socket 1 00:04:15.652 EAL: Detected lcore 37 as core 10 on socket 1 00:04:15.652 EAL: Detected lcore 38 as core 11 on socket 1 00:04:15.652 EAL: Detected lcore 39 as core 12 on socket 1 00:04:15.652 EAL: Detected lcore 40 as core 13 on socket 1 00:04:15.652 EAL: Detected lcore 41 as core 14 on socket 1 00:04:15.652 EAL: Detected lcore 42 as core 16 on socket 1 00:04:15.652 EAL: Detected lcore 43 as core 17 on socket 1 00:04:15.652 EAL: Detected lcore 44 as core 18 on socket 1 00:04:15.652 EAL: Detected lcore 45 as core 19 on socket 1 00:04:15.652 EAL: Detected lcore 46 as core 20 on 
socket 1 00:04:15.652 EAL: Detected lcore 47 as core 21 on socket 1 00:04:15.652 EAL: Detected lcore 48 as core 22 on socket 1 00:04:15.652 EAL: Detected lcore 49 as core 24 on socket 1 00:04:15.652 EAL: Detected lcore 50 as core 25 on socket 1 00:04:15.652 EAL: Detected lcore 51 as core 26 on socket 1 00:04:15.652 EAL: Detected lcore 52 as core 27 on socket 1 00:04:15.652 EAL: Detected lcore 53 as core 28 on socket 1 00:04:15.652 EAL: Detected lcore 54 as core 29 on socket 1 00:04:15.652 EAL: Detected lcore 55 as core 30 on socket 1 00:04:15.652 EAL: Detected lcore 56 as core 0 on socket 0 00:04:15.652 EAL: Detected lcore 57 as core 1 on socket 0 00:04:15.652 EAL: Detected lcore 58 as core 2 on socket 0 00:04:15.652 EAL: Detected lcore 59 as core 3 on socket 0 00:04:15.652 EAL: Detected lcore 60 as core 4 on socket 0 00:04:15.652 EAL: Detected lcore 61 as core 5 on socket 0 00:04:15.652 EAL: Detected lcore 62 as core 6 on socket 0 00:04:15.652 EAL: Detected lcore 63 as core 8 on socket 0 00:04:15.652 EAL: Detected lcore 64 as core 9 on socket 0 00:04:15.652 EAL: Detected lcore 65 as core 10 on socket 0 00:04:15.652 EAL: Detected lcore 66 as core 11 on socket 0 00:04:15.652 EAL: Detected lcore 67 as core 12 on socket 0 00:04:15.652 EAL: Detected lcore 68 as core 13 on socket 0 00:04:15.652 EAL: Detected lcore 69 as core 14 on socket 0 00:04:15.652 EAL: Detected lcore 70 as core 16 on socket 0 00:04:15.652 EAL: Detected lcore 71 as core 17 on socket 0 00:04:15.652 EAL: Detected lcore 72 as core 18 on socket 0 00:04:15.652 EAL: Detected lcore 73 as core 19 on socket 0 00:04:15.652 EAL: Detected lcore 74 as core 20 on socket 0 00:04:15.652 EAL: Detected lcore 75 as core 21 on socket 0 00:04:15.652 EAL: Detected lcore 76 as core 22 on socket 0 00:04:15.652 EAL: Detected lcore 77 as core 24 on socket 0 00:04:15.652 EAL: Detected lcore 78 as core 25 on socket 0 00:04:15.652 EAL: Detected lcore 79 as core 26 on socket 0 00:04:15.652 EAL: Detected lcore 80 as core 27 on socket 0 00:04:15.652 EAL: Detected lcore 81 as core 28 on socket 0 00:04:15.652 EAL: Detected lcore 82 as core 29 on socket 0 00:04:15.652 EAL: Detected lcore 83 as core 30 on socket 0 00:04:15.652 EAL: Detected lcore 84 as core 0 on socket 1 00:04:15.652 EAL: Detected lcore 85 as core 1 on socket 1 00:04:15.652 EAL: Detected lcore 86 as core 2 on socket 1 00:04:15.652 EAL: Detected lcore 87 as core 3 on socket 1 00:04:15.652 EAL: Detected lcore 88 as core 4 on socket 1 00:04:15.652 EAL: Detected lcore 89 as core 5 on socket 1 00:04:15.652 EAL: Detected lcore 90 as core 6 on socket 1 00:04:15.652 EAL: Detected lcore 91 as core 8 on socket 1 00:04:15.652 EAL: Detected lcore 92 as core 9 on socket 1 00:04:15.652 EAL: Detected lcore 93 as core 10 on socket 1 00:04:15.652 EAL: Detected lcore 94 as core 11 on socket 1 00:04:15.652 EAL: Detected lcore 95 as core 12 on socket 1 00:04:15.652 EAL: Detected lcore 96 as core 13 on socket 1 00:04:15.652 EAL: Detected lcore 97 as core 14 on socket 1 00:04:15.652 EAL: Detected lcore 98 as core 16 on socket 1 00:04:15.652 EAL: Detected lcore 99 as core 17 on socket 1 00:04:15.652 EAL: Detected lcore 100 as core 18 on socket 1 00:04:15.652 EAL: Detected lcore 101 as core 19 on socket 1 00:04:15.653 EAL: Detected lcore 102 as core 20 on socket 1 00:04:15.653 EAL: Detected lcore 103 as core 21 on socket 1 00:04:15.653 EAL: Detected lcore 104 as core 22 on socket 1 00:04:15.653 EAL: Detected lcore 105 as core 24 on socket 1 00:04:15.653 EAL: Detected lcore 106 as core 25 on socket 1 00:04:15.653 
EAL: Detected lcore 107 as core 26 on socket 1 00:04:15.653 EAL: Detected lcore 108 as core 27 on socket 1 00:04:15.653 EAL: Detected lcore 109 as core 28 on socket 1 00:04:15.653 EAL: Detected lcore 110 as core 29 on socket 1 00:04:15.653 EAL: Detected lcore 111 as core 30 on socket 1 00:04:15.653 EAL: Maximum logical cores by configuration: 128 00:04:15.653 EAL: Detected CPU lcores: 112 00:04:15.653 EAL: Detected NUMA nodes: 2 00:04:15.653 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:15.653 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:15.653 EAL: Checking presence of .so 'librte_eal.so' 00:04:15.653 EAL: Detected static linkage of DPDK 00:04:15.653 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.653 EAL: Bus pci wants IOVA as 'DC' 00:04:15.653 EAL: Buses did not request a specific IOVA mode. 00:04:15.653 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:15.653 EAL: Selected IOVA mode 'VA' 00:04:15.653 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.653 EAL: Probing VFIO support... 00:04:15.653 EAL: IOMMU type 1 (Type 1) is supported 00:04:15.653 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:15.653 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:15.653 EAL: VFIO support initialized 00:04:15.653 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.653 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.653 EAL: Setting up physically contiguous memory... 00:04:15.653 EAL: Setting maximum number of open files to 524288 00:04:15.653 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.653 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:15.653 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.653 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.653 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.653 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.653 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.653 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.653 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.653 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.653 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.653 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.653 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.653 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.653 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.653 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:15.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.653 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:04:15.653 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.653 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:15.653 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:15.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.653 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:15.653 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.653 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:15.653 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:15.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.653 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:15.653 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.653 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:15.653 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:15.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.653 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:15.653 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.653 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:15.653 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:15.653 EAL: Hugepages will be freed exactly as allocated. 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: TSC frequency is ~2500000 KHz 00:04:15.653 EAL: Main lcore 0 is ready (tid=7f4f627f6a00;cpuset=[0]) 00:04:15.653 EAL: Trying to obtain current memory policy. 00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.653 EAL: Restoring previous memory policy: 0 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was expanded by 2MB 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Mem event callback 'spdk:(nil)' registered 00:04:15.653 00:04:15.653 00:04:15.653 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.653 http://cunit.sourceforge.net/ 00:04:15.653 00:04:15.653 00:04:15.653 Suite: components_suite 00:04:15.653 Test: vtophys_malloc_test ...passed 00:04:15.653 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.653 EAL: Restoring previous memory policy: 4 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.653 EAL: Trying to obtain current memory policy. 
00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.653 EAL: Restoring previous memory policy: 4 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.653 EAL: Trying to obtain current memory policy. 00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.653 EAL: Restoring previous memory policy: 4 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.653 EAL: Trying to obtain current memory policy. 00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.653 EAL: Restoring previous memory policy: 4 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.653 EAL: Trying to obtain current memory policy. 00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.653 EAL: Restoring previous memory policy: 4 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was shrunk by 34MB 00:04:15.653 EAL: Trying to obtain current memory policy. 00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.653 EAL: Restoring previous memory policy: 4 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.653 EAL: Trying to obtain current memory policy. 
00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.653 EAL: Restoring previous memory policy: 4 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was expanded by 130MB 00:04:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.653 EAL: request: mp_malloc_sync 00:04:15.653 EAL: No shared files mode enabled, IPC is disabled 00:04:15.653 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.653 EAL: Trying to obtain current memory policy. 00:04:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.912 EAL: Restoring previous memory policy: 4 00:04:15.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.912 EAL: request: mp_malloc_sync 00:04:15.912 EAL: No shared files mode enabled, IPC is disabled 00:04:15.912 EAL: Heap on socket 0 was expanded by 258MB 00:04:15.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.912 EAL: request: mp_malloc_sync 00:04:15.912 EAL: No shared files mode enabled, IPC is disabled 00:04:15.912 EAL: Heap on socket 0 was shrunk by 258MB 00:04:15.912 EAL: Trying to obtain current memory policy. 00:04:15.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.912 EAL: Restoring previous memory policy: 4 00:04:15.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.912 EAL: request: mp_malloc_sync 00:04:15.912 EAL: No shared files mode enabled, IPC is disabled 00:04:15.912 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.169 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.170 EAL: request: mp_malloc_sync 00:04:16.170 EAL: No shared files mode enabled, IPC is disabled 00:04:16.170 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.170 EAL: Trying to obtain current memory policy. 
00:04:16.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.428 EAL: Restoring previous memory policy: 4 00:04:16.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.428 EAL: request: mp_malloc_sync 00:04:16.428 EAL: No shared files mode enabled, IPC is disabled 00:04:16.428 EAL: Heap on socket 0 was expanded by 1026MB 00:04:16.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.687 EAL: request: mp_malloc_sync 00:04:16.687 EAL: No shared files mode enabled, IPC is disabled 00:04:16.687 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:16.687 passed 00:04:16.687 00:04:16.687 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.687 suites 1 1 n/a 0 0 00:04:16.687 tests 2 2 2 0 0 00:04:16.687 asserts 497 497 497 0 n/a 00:04:16.687 00:04:16.687 Elapsed time = 0.961 seconds 00:04:16.687 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.687 EAL: request: mp_malloc_sync 00:04:16.687 EAL: No shared files mode enabled, IPC is disabled 00:04:16.687 EAL: Heap on socket 0 was shrunk by 2MB 00:04:16.687 EAL: No shared files mode enabled, IPC is disabled 00:04:16.687 EAL: No shared files mode enabled, IPC is disabled 00:04:16.687 EAL: No shared files mode enabled, IPC is disabled 00:04:16.687 00:04:16.687 real 0m1.082s 00:04:16.687 user 0m0.630s 00:04:16.687 sys 0m0.427s 00:04:16.687 10:56:13 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:16.687 10:56:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:16.687 ************************************ 00:04:16.687 END TEST env_vtophys 00:04:16.687 ************************************ 00:04:16.687 10:56:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:16.687 10:56:13 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:16.687 10:56:13 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:16.687 10:56:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.687 ************************************ 00:04:16.687 START TEST env_pci 00:04:16.687 ************************************ 00:04:16.687 10:56:13 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:16.687 00:04:16.687 00:04:16.687 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.687 http://cunit.sourceforge.net/ 00:04:16.687 00:04:16.687 00:04:16.687 Suite: pci 00:04:16.687 Test: pci_hook ...[2024-05-15 10:56:13.887667] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1377008 has claimed it 00:04:16.687 EAL: Cannot find device (10000:00:01.0) 00:04:16.687 EAL: Failed to attach device on primary process 00:04:16.687 passed 00:04:16.687 00:04:16.687 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.687 suites 1 1 n/a 0 0 00:04:16.687 tests 1 1 1 0 0 00:04:16.687 asserts 25 25 25 0 n/a 00:04:16.687 00:04:16.687 Elapsed time = 0.036 seconds 00:04:16.687 00:04:16.687 real 0m0.055s 00:04:16.687 user 0m0.012s 00:04:16.687 sys 0m0.043s 00:04:16.687 10:56:13 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:16.687 10:56:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:16.687 ************************************ 00:04:16.687 END TEST env_pci 00:04:16.687 ************************************ 00:04:16.945 10:56:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:16.945 
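Each vtophys_spdk_malloc_test round above follows the same shape: a DMA buffer of roughly double the previous size (from a few MB up to about 1 GB) is allocated, which fires the registered 'spdk:(nil)' mem event callback and produces the "Heap on socket 0 was expanded by ..." message, and freeing it produces the matching "shrunk by ..." message because, as EAL noted earlier, hugepages are freed exactly as allocated. The point of the test is that spdk_vtophys() resolves every such buffer to a real physical address. A sketch of one round, assuming the environment is already initialized (the helper name vtophys_round is illustrative):

    #include "spdk/env.h"
    #include <assert.h>
    #include <stddef.h>

    static void
    vtophys_round(size_t size)
    {
        /* Allocation grows the DPDK heap (the "expanded by" lines above)... */
        void *buf = spdk_dma_zmalloc(size, 0x200000, NULL);
        assert(buf != NULL);

        /* ...and pinned DMA memory must translate to a physical address. */
        assert(spdk_vtophys(buf, NULL) != SPDK_VTOPHYS_ERROR);

        /* Freeing lets the heap be trimmed again (the "shrunk by" lines). */
        spdk_dma_free(buf);
    }

The env_pci case that follows is a negative test in the same spirit: the lock for the fictitious device 10000:00:01.0 is already held by another process, so spdk_pci_device_claim() and the attach are expected to fail; the *ERROR* lines there are the point of the test, not a problem.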
10:56:13 env -- env/env.sh@15 -- # uname 00:04:16.945 10:56:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:16.945 10:56:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:16.945 10:56:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.945 10:56:13 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:04:16.945 10:56:13 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:16.945 10:56:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.945 ************************************ 00:04:16.945 START TEST env_dpdk_post_init 00:04:16.945 ************************************ 00:04:16.945 10:56:14 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.945 EAL: Detected CPU lcores: 112 00:04:16.945 EAL: Detected NUMA nodes: 2 00:04:16.945 EAL: Detected static linkage of DPDK 00:04:16.945 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.945 EAL: Selected IOVA mode 'VA' 00:04:16.945 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.945 EAL: VFIO support initialized 00:04:16.945 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.945 EAL: Using IOMMU type 1 (Type 1) 00:04:17.880 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:21.162 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:21.162 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:21.729 Starting DPDK initialization... 00:04:21.729 Starting SPDK post initialization... 00:04:21.729 SPDK NVMe probe 00:04:21.729 Attaching to 0000:d8:00.0 00:04:21.729 Attached to 0000:d8:00.0 00:04:21.729 Cleaning up... 
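The "-c 0x1 --base-virtaddr=0x200000000000" arguments that env.sh assembled above pin env_dpdk_post_init to core 0 and fix the virtual address base at which DPDK maps its memory; once the environment is up, the probe attaches the NVMe controller at 0000:d8:00.0 and cleans up. Roughly how those two flags map onto the env API, shown as a sketch rather than the test's actual argument parsing:

    #include "spdk/env.h"

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init";
        opts.core_mask = "0x1";                    /* -c 0x1 */
        opts.base_virtaddr = 0x200000000000ULL;    /* --base-virtaddr=0x200000000000 */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* spdk_nvme_probe() would walk the PCI bus here and attach 0000:d8:00.0. */
        return 0;
    }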
00:04:21.729 00:04:21.729 real 0m4.764s 00:04:21.729 user 0m3.581s 00:04:21.729 sys 0m0.427s 00:04:21.729 10:56:18 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:21.729 10:56:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.729 ************************************ 00:04:21.729 END TEST env_dpdk_post_init 00:04:21.729 ************************************ 00:04:21.729 10:56:18 env -- env/env.sh@26 -- # uname 00:04:21.729 10:56:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:21.729 10:56:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.729 10:56:18 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:21.729 10:56:18 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:21.729 10:56:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.729 ************************************ 00:04:21.729 START TEST env_mem_callbacks 00:04:21.729 ************************************ 00:04:21.729 10:56:18 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.729 EAL: Detected CPU lcores: 112 00:04:21.729 EAL: Detected NUMA nodes: 2 00:04:21.729 EAL: Detected static linkage of DPDK 00:04:21.729 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.729 EAL: Selected IOVA mode 'VA' 00:04:21.729 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.729 EAL: VFIO support initialized 00:04:21.729 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.729 00:04:21.729 00:04:21.729 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.729 http://cunit.sourceforge.net/ 00:04:21.729 00:04:21.729 00:04:21.729 Suite: memory 00:04:21.729 Test: test ... 
00:04:21.729 register 0x200000200000 2097152 00:04:21.729 malloc 3145728 00:04:21.729 register 0x200000400000 4194304 00:04:21.729 buf 0x200000500000 len 3145728 PASSED 00:04:21.729 malloc 64 00:04:21.729 buf 0x2000004fff40 len 64 PASSED 00:04:21.729 malloc 4194304 00:04:21.729 register 0x200000800000 6291456 00:04:21.729 buf 0x200000a00000 len 4194304 PASSED 00:04:21.729 free 0x200000500000 3145728 00:04:21.729 free 0x2000004fff40 64 00:04:21.729 unregister 0x200000400000 4194304 PASSED 00:04:21.729 free 0x200000a00000 4194304 00:04:21.729 unregister 0x200000800000 6291456 PASSED 00:04:21.729 malloc 8388608 00:04:21.729 register 0x200000400000 10485760 00:04:21.729 buf 0x200000600000 len 8388608 PASSED 00:04:21.729 free 0x200000600000 8388608 00:04:21.729 unregister 0x200000400000 10485760 PASSED 00:04:21.729 passed 00:04:21.729 00:04:21.729 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.729 suites 1 1 n/a 0 0 00:04:21.729 tests 1 1 1 0 0 00:04:21.729 asserts 15 15 15 0 n/a 00:04:21.729 00:04:21.729 Elapsed time = 0.005 seconds 00:04:21.729 00:04:21.729 real 0m0.059s 00:04:21.729 user 0m0.018s 00:04:21.729 sys 0m0.041s 00:04:21.729 10:56:18 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:21.729 10:56:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:21.729 ************************************ 00:04:21.729 END TEST env_mem_callbacks 00:04:21.729 ************************************ 00:04:21.729 00:04:21.729 real 0m6.594s 00:04:21.729 user 0m4.518s 00:04:21.729 sys 0m1.311s 00:04:21.729 10:56:18 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:21.729 10:56:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.729 ************************************ 00:04:21.729 END TEST env 00:04:21.729 ************************************ 00:04:21.988 10:56:19 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:21.988 10:56:19 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:21.988 10:56:19 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:21.988 10:56:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.988 ************************************ 00:04:21.988 START TEST rpc 00:04:21.988 ************************************ 00:04:21.988 10:56:19 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:21.988 * Looking for test storage... 00:04:21.988 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:21.988 10:56:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1378016 00:04:21.988 10:56:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.988 10:56:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1378016 00:04:21.988 10:56:19 rpc -- common/autotest_common.sh@828 -- # '[' -z 1378016 ']' 00:04:21.988 10:56:19 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.988 10:56:19 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:21.988 10:56:19 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
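Looking back at the env_mem_callbacks trace above: the register/unregister lines interleaved with malloc and free are SPDK reacting to DPDK memory events. Whenever the DPDK malloc heap grows, an allocation event fires and the new region is handed to spdk_mem_register(); when memory is released back, the matching free event triggers spdk_mem_unregister(). A minimal sketch of such a hook using DPDK's public callback API, assuming rte_eal_init() has already run (the callback name "mem-cb" and the printf are illustrative; SPDK's real callback registers or unregisters the region instead of printing):

    #include <stdio.h>
    #include <rte_memory.h>

    static void
    mem_event_cb(enum rte_mem_event type, const void *addr, size_t len, void *arg)
    {
        (void)arg;
        /* Mimics the register/unregister lines seen in the trace above. */
        printf("%s %p %zu\n",
               type == RTE_MEM_EVENT_ALLOC ? "register" : "unregister", addr, len);
    }

    static int
    install_mem_event_hook(void)
    {
        /* Returns 0 on success, negative on failure. */
        return rte_mem_event_callback_register("mem-cb", mem_event_cb, NULL);
    }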
00:04:21.988 10:56:19 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:21.988 10:56:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.988 10:56:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:21.988 [2024-05-15 10:56:19.196312] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:21.988 [2024-05-15 10:56:19.196404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378016 ] 00:04:21.988 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.247 [2024-05-15 10:56:19.265989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.247 [2024-05-15 10:56:19.344666] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:22.247 [2024-05-15 10:56:19.344702] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1378016' to capture a snapshot of events at runtime. 00:04:22.247 [2024-05-15 10:56:19.344711] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:22.247 [2024-05-15 10:56:19.344720] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:22.247 [2024-05-15 10:56:19.344727] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1378016 for offline analysis/debug. 00:04:22.247 [2024-05-15 10:56:19.344758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.813 10:56:20 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:22.813 10:56:20 rpc -- common/autotest_common.sh@861 -- # return 0 00:04:22.813 10:56:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:22.814 10:56:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:22.814 10:56:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:22.814 10:56:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:22.814 10:56:20 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:22.814 10:56:20 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:22.814 10:56:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.814 ************************************ 00:04:22.814 START TEST rpc_integrity 00:04:22.814 ************************************ 00:04:22.814 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:22.814 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.814 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:22.814 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.814 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:22.814 10:56:20 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.814 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.072 { 00:04:23.072 "name": "Malloc0", 00:04:23.072 "aliases": [ 00:04:23.072 "e8a1d193-fa7c-43ec-91b5-b39b688dcdc5" 00:04:23.072 ], 00:04:23.072 "product_name": "Malloc disk", 00:04:23.072 "block_size": 512, 00:04:23.072 "num_blocks": 16384, 00:04:23.072 "uuid": "e8a1d193-fa7c-43ec-91b5-b39b688dcdc5", 00:04:23.072 "assigned_rate_limits": { 00:04:23.072 "rw_ios_per_sec": 0, 00:04:23.072 "rw_mbytes_per_sec": 0, 00:04:23.072 "r_mbytes_per_sec": 0, 00:04:23.072 "w_mbytes_per_sec": 0 00:04:23.072 }, 00:04:23.072 "claimed": false, 00:04:23.072 "zoned": false, 00:04:23.072 "supported_io_types": { 00:04:23.072 "read": true, 00:04:23.072 "write": true, 00:04:23.072 "unmap": true, 00:04:23.072 "write_zeroes": true, 00:04:23.072 "flush": true, 00:04:23.072 "reset": true, 00:04:23.072 "compare": false, 00:04:23.072 "compare_and_write": false, 00:04:23.072 "abort": true, 00:04:23.072 "nvme_admin": false, 00:04:23.072 "nvme_io": false 00:04:23.072 }, 00:04:23.072 "memory_domains": [ 00:04:23.072 { 00:04:23.072 "dma_device_id": "system", 00:04:23.072 "dma_device_type": 1 00:04:23.072 }, 00:04:23.072 { 00:04:23.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.072 "dma_device_type": 2 00:04:23.072 } 00:04:23.072 ], 00:04:23.072 "driver_specific": {} 00:04:23.072 } 00:04:23.072 ]' 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.072 [2024-05-15 10:56:20.187360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:23.072 [2024-05-15 10:56:20.187399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.072 [2024-05-15 10:56:20.187421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x56a8060 00:04:23.072 [2024-05-15 10:56:20.187430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.072 [2024-05-15 10:56:20.188245] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.072 [2024-05-15 10:56:20.188268] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.072 Passthru0 00:04:23.072 10:56:20 
rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.072 { 00:04:23.072 "name": "Malloc0", 00:04:23.072 "aliases": [ 00:04:23.072 "e8a1d193-fa7c-43ec-91b5-b39b688dcdc5" 00:04:23.072 ], 00:04:23.072 "product_name": "Malloc disk", 00:04:23.072 "block_size": 512, 00:04:23.072 "num_blocks": 16384, 00:04:23.072 "uuid": "e8a1d193-fa7c-43ec-91b5-b39b688dcdc5", 00:04:23.072 "assigned_rate_limits": { 00:04:23.072 "rw_ios_per_sec": 0, 00:04:23.072 "rw_mbytes_per_sec": 0, 00:04:23.072 "r_mbytes_per_sec": 0, 00:04:23.072 "w_mbytes_per_sec": 0 00:04:23.072 }, 00:04:23.072 "claimed": true, 00:04:23.072 "claim_type": "exclusive_write", 00:04:23.072 "zoned": false, 00:04:23.072 "supported_io_types": { 00:04:23.072 "read": true, 00:04:23.072 "write": true, 00:04:23.072 "unmap": true, 00:04:23.072 "write_zeroes": true, 00:04:23.072 "flush": true, 00:04:23.072 "reset": true, 00:04:23.072 "compare": false, 00:04:23.072 "compare_and_write": false, 00:04:23.072 "abort": true, 00:04:23.072 "nvme_admin": false, 00:04:23.072 "nvme_io": false 00:04:23.072 }, 00:04:23.072 "memory_domains": [ 00:04:23.072 { 00:04:23.072 "dma_device_id": "system", 00:04:23.072 "dma_device_type": 1 00:04:23.072 }, 00:04:23.072 { 00:04:23.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.072 "dma_device_type": 2 00:04:23.072 } 00:04:23.072 ], 00:04:23.072 "driver_specific": {} 00:04:23.072 }, 00:04:23.072 { 00:04:23.072 "name": "Passthru0", 00:04:23.072 "aliases": [ 00:04:23.072 "6dfe1eec-2367-5697-997f-72eb7f1ae531" 00:04:23.072 ], 00:04:23.072 "product_name": "passthru", 00:04:23.072 "block_size": 512, 00:04:23.072 "num_blocks": 16384, 00:04:23.072 "uuid": "6dfe1eec-2367-5697-997f-72eb7f1ae531", 00:04:23.072 "assigned_rate_limits": { 00:04:23.072 "rw_ios_per_sec": 0, 00:04:23.072 "rw_mbytes_per_sec": 0, 00:04:23.072 "r_mbytes_per_sec": 0, 00:04:23.072 "w_mbytes_per_sec": 0 00:04:23.072 }, 00:04:23.072 "claimed": false, 00:04:23.072 "zoned": false, 00:04:23.072 "supported_io_types": { 00:04:23.072 "read": true, 00:04:23.072 "write": true, 00:04:23.072 "unmap": true, 00:04:23.072 "write_zeroes": true, 00:04:23.072 "flush": true, 00:04:23.072 "reset": true, 00:04:23.072 "compare": false, 00:04:23.072 "compare_and_write": false, 00:04:23.072 "abort": true, 00:04:23.072 "nvme_admin": false, 00:04:23.072 "nvme_io": false 00:04:23.072 }, 00:04:23.072 "memory_domains": [ 00:04:23.072 { 00:04:23.072 "dma_device_id": "system", 00:04:23.072 "dma_device_type": 1 00:04:23.072 }, 00:04:23.072 { 00:04:23.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.072 "dma_device_type": 2 00:04:23.072 } 00:04:23.072 ], 00:04:23.072 "driver_specific": { 00:04:23.072 "passthru": { 00:04:23.072 "name": "Passthru0", 00:04:23.072 "base_bdev_name": "Malloc0" 00:04:23.072 } 00:04:23.072 } 00:04:23.072 } 00:04:23.072 ]' 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.072 10:56:20 rpc.rpc_integrity -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.072 10:56:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.072 00:04:23.072 real 0m0.285s 00:04:23.072 user 0m0.176s 00:04:23.072 sys 0m0.052s 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:23.072 10:56:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.072 ************************************ 00:04:23.072 END TEST rpc_integrity 00:04:23.072 ************************************ 00:04:23.330 10:56:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:23.330 10:56:20 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:23.330 10:56:20 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:23.330 10:56:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.330 ************************************ 00:04:23.330 START TEST rpc_plugins 00:04:23.330 ************************************ 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:23.330 { 00:04:23.330 "name": "Malloc1", 00:04:23.330 "aliases": [ 00:04:23.330 "f6e7e958-e503-43a1-985d-cf66dccade3a" 00:04:23.330 ], 00:04:23.330 "product_name": "Malloc disk", 00:04:23.330 "block_size": 4096, 00:04:23.330 "num_blocks": 256, 00:04:23.330 "uuid": "f6e7e958-e503-43a1-985d-cf66dccade3a", 00:04:23.330 "assigned_rate_limits": { 00:04:23.330 "rw_ios_per_sec": 0, 00:04:23.330 "rw_mbytes_per_sec": 0, 00:04:23.330 "r_mbytes_per_sec": 0, 00:04:23.330 "w_mbytes_per_sec": 0 00:04:23.330 }, 00:04:23.330 "claimed": false, 00:04:23.330 "zoned": false, 00:04:23.330 
"supported_io_types": { 00:04:23.330 "read": true, 00:04:23.330 "write": true, 00:04:23.330 "unmap": true, 00:04:23.330 "write_zeroes": true, 00:04:23.330 "flush": true, 00:04:23.330 "reset": true, 00:04:23.330 "compare": false, 00:04:23.330 "compare_and_write": false, 00:04:23.330 "abort": true, 00:04:23.330 "nvme_admin": false, 00:04:23.330 "nvme_io": false 00:04:23.330 }, 00:04:23.330 "memory_domains": [ 00:04:23.330 { 00:04:23.330 "dma_device_id": "system", 00:04:23.330 "dma_device_type": 1 00:04:23.330 }, 00:04:23.330 { 00:04:23.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.330 "dma_device_type": 2 00:04:23.330 } 00:04:23.330 ], 00:04:23.330 "driver_specific": {} 00:04:23.330 } 00:04:23.330 ]' 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:23.330 10:56:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:23.330 00:04:23.330 real 0m0.148s 00:04:23.330 user 0m0.092s 00:04:23.330 sys 0m0.020s 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:23.330 10:56:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.330 ************************************ 00:04:23.330 END TEST rpc_plugins 00:04:23.330 ************************************ 00:04:23.589 10:56:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:23.589 10:56:20 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:23.589 10:56:20 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:23.589 10:56:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.589 ************************************ 00:04:23.589 START TEST rpc_trace_cmd_test 00:04:23.589 ************************************ 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:23.589 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1378016", 00:04:23.589 "tpoint_group_mask": "0x8", 00:04:23.589 "iscsi_conn": { 00:04:23.589 "mask": "0x2", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "scsi": { 00:04:23.589 
"mask": "0x4", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "bdev": { 00:04:23.589 "mask": "0x8", 00:04:23.589 "tpoint_mask": "0xffffffffffffffff" 00:04:23.589 }, 00:04:23.589 "nvmf_rdma": { 00:04:23.589 "mask": "0x10", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "nvmf_tcp": { 00:04:23.589 "mask": "0x20", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "ftl": { 00:04:23.589 "mask": "0x40", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "blobfs": { 00:04:23.589 "mask": "0x80", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "dsa": { 00:04:23.589 "mask": "0x200", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "thread": { 00:04:23.589 "mask": "0x400", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "nvme_pcie": { 00:04:23.589 "mask": "0x800", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "iaa": { 00:04:23.589 "mask": "0x1000", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "nvme_tcp": { 00:04:23.589 "mask": "0x2000", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "bdev_nvme": { 00:04:23.589 "mask": "0x4000", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 }, 00:04:23.589 "sock": { 00:04:23.589 "mask": "0x8000", 00:04:23.589 "tpoint_mask": "0x0" 00:04:23.589 } 00:04:23.589 }' 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:23.589 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:23.847 10:56:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:23.847 00:04:23.847 real 0m0.223s 00:04:23.847 user 0m0.191s 00:04:23.847 sys 0m0.026s 00:04:23.847 10:56:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:23.847 10:56:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.847 ************************************ 00:04:23.847 END TEST rpc_trace_cmd_test 00:04:23.847 ************************************ 00:04:23.847 10:56:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:23.847 10:56:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:23.847 10:56:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:23.847 10:56:20 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:23.847 10:56:20 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:23.847 10:56:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.847 ************************************ 00:04:23.847 START TEST rpc_daemon_integrity 00:04:23.847 ************************************ 00:04:23.847 10:56:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:23.847 10:56:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.847 10:56:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 
00:04:23.847 10:56:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.847 10:56:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.847 10:56:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.847 10:56:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.847 { 00:04:23.847 "name": "Malloc2", 00:04:23.847 "aliases": [ 00:04:23.847 "f579868a-6dd6-4719-ab1c-6890ac493df2" 00:04:23.847 ], 00:04:23.847 "product_name": "Malloc disk", 00:04:23.847 "block_size": 512, 00:04:23.847 "num_blocks": 16384, 00:04:23.847 "uuid": "f579868a-6dd6-4719-ab1c-6890ac493df2", 00:04:23.847 "assigned_rate_limits": { 00:04:23.847 "rw_ios_per_sec": 0, 00:04:23.847 "rw_mbytes_per_sec": 0, 00:04:23.847 "r_mbytes_per_sec": 0, 00:04:23.847 "w_mbytes_per_sec": 0 00:04:23.847 }, 00:04:23.847 "claimed": false, 00:04:23.847 "zoned": false, 00:04:23.847 "supported_io_types": { 00:04:23.847 "read": true, 00:04:23.847 "write": true, 00:04:23.847 "unmap": true, 00:04:23.847 "write_zeroes": true, 00:04:23.847 "flush": true, 00:04:23.847 "reset": true, 00:04:23.847 "compare": false, 00:04:23.847 "compare_and_write": false, 00:04:23.847 "abort": true, 00:04:23.847 "nvme_admin": false, 00:04:23.847 "nvme_io": false 00:04:23.847 }, 00:04:23.847 "memory_domains": [ 00:04:23.847 { 00:04:23.847 "dma_device_id": "system", 00:04:23.847 "dma_device_type": 1 00:04:23.847 }, 00:04:23.847 { 00:04:23.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.847 "dma_device_type": 2 00:04:23.847 } 00:04:23.847 ], 00:04:23.847 "driver_specific": {} 00:04:23.847 } 00:04:23.847 ]' 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.847 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.107 [2024-05-15 10:56:21.113775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:24.107 [2024-05-15 10:56:21.113807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.107 [2024-05-15 10:56:21.113825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x56a9960 00:04:24.107 [2024-05-15 10:56:21.113835] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.107 [2024-05-15 10:56:21.114552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.107 [2024-05-15 10:56:21.114574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.107 Passthru0 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.107 { 00:04:24.107 "name": "Malloc2", 00:04:24.107 "aliases": [ 00:04:24.107 "f579868a-6dd6-4719-ab1c-6890ac493df2" 00:04:24.107 ], 00:04:24.107 "product_name": "Malloc disk", 00:04:24.107 "block_size": 512, 00:04:24.107 "num_blocks": 16384, 00:04:24.107 "uuid": "f579868a-6dd6-4719-ab1c-6890ac493df2", 00:04:24.107 "assigned_rate_limits": { 00:04:24.107 "rw_ios_per_sec": 0, 00:04:24.107 "rw_mbytes_per_sec": 0, 00:04:24.107 "r_mbytes_per_sec": 0, 00:04:24.107 "w_mbytes_per_sec": 0 00:04:24.107 }, 00:04:24.107 "claimed": true, 00:04:24.107 "claim_type": "exclusive_write", 00:04:24.107 "zoned": false, 00:04:24.107 "supported_io_types": { 00:04:24.107 "read": true, 00:04:24.107 "write": true, 00:04:24.107 "unmap": true, 00:04:24.107 "write_zeroes": true, 00:04:24.107 "flush": true, 00:04:24.107 "reset": true, 00:04:24.107 "compare": false, 00:04:24.107 "compare_and_write": false, 00:04:24.107 "abort": true, 00:04:24.107 "nvme_admin": false, 00:04:24.107 "nvme_io": false 00:04:24.107 }, 00:04:24.107 "memory_domains": [ 00:04:24.107 { 00:04:24.107 "dma_device_id": "system", 00:04:24.107 "dma_device_type": 1 00:04:24.107 }, 00:04:24.107 { 00:04:24.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.107 "dma_device_type": 2 00:04:24.107 } 00:04:24.107 ], 00:04:24.107 "driver_specific": {} 00:04:24.107 }, 00:04:24.107 { 00:04:24.107 "name": "Passthru0", 00:04:24.107 "aliases": [ 00:04:24.107 "deee6084-684a-5b05-b488-3dfc41f2e101" 00:04:24.107 ], 00:04:24.107 "product_name": "passthru", 00:04:24.107 "block_size": 512, 00:04:24.107 "num_blocks": 16384, 00:04:24.107 "uuid": "deee6084-684a-5b05-b488-3dfc41f2e101", 00:04:24.107 "assigned_rate_limits": { 00:04:24.107 "rw_ios_per_sec": 0, 00:04:24.107 "rw_mbytes_per_sec": 0, 00:04:24.107 "r_mbytes_per_sec": 0, 00:04:24.107 "w_mbytes_per_sec": 0 00:04:24.107 }, 00:04:24.107 "claimed": false, 00:04:24.107 "zoned": false, 00:04:24.107 "supported_io_types": { 00:04:24.107 "read": true, 00:04:24.107 "write": true, 00:04:24.107 "unmap": true, 00:04:24.107 "write_zeroes": true, 00:04:24.107 "flush": true, 00:04:24.107 "reset": true, 00:04:24.107 "compare": false, 00:04:24.107 "compare_and_write": false, 00:04:24.107 "abort": true, 00:04:24.107 "nvme_admin": false, 00:04:24.107 "nvme_io": false 00:04:24.107 }, 00:04:24.107 "memory_domains": [ 00:04:24.107 { 00:04:24.107 "dma_device_id": "system", 00:04:24.107 "dma_device_type": 1 00:04:24.107 }, 00:04:24.107 { 00:04:24.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.107 "dma_device_type": 2 00:04:24.107 } 00:04:24.107 ], 00:04:24.107 "driver_specific": { 00:04:24.107 "passthru": { 00:04:24.107 "name": "Passthru0", 
00:04:24.107 "base_bdev_name": "Malloc2" 00:04:24.107 } 00:04:24.107 } 00:04:24.107 } 00:04:24.107 ]' 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.107 00:04:24.107 real 0m0.273s 00:04:24.107 user 0m0.169s 00:04:24.107 sys 0m0.051s 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:24.107 10:56:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.107 ************************************ 00:04:24.107 END TEST rpc_daemon_integrity 00:04:24.107 ************************************ 00:04:24.107 10:56:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:24.107 10:56:21 rpc -- rpc/rpc.sh@84 -- # killprocess 1378016 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@947 -- # '[' -z 1378016 ']' 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@951 -- # kill -0 1378016 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@952 -- # uname 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1378016 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1378016' 00:04:24.107 killing process with pid 1378016 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@966 -- # kill 1378016 00:04:24.107 10:56:21 rpc -- common/autotest_common.sh@971 -- # wait 1378016 00:04:24.675 00:04:24.675 real 0m2.580s 00:04:24.675 user 0m3.262s 00:04:24.675 sys 0m0.823s 00:04:24.675 10:56:21 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:24.675 10:56:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.675 ************************************ 00:04:24.675 END TEST rpc 00:04:24.675 ************************************ 00:04:24.675 10:56:21 -- 
spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:24.675 10:56:21 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:24.675 10:56:21 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:24.675 10:56:21 -- common/autotest_common.sh@10 -- # set +x 00:04:24.675 ************************************ 00:04:24.675 START TEST skip_rpc 00:04:24.675 ************************************ 00:04:24.675 10:56:21 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:24.675 * Looking for test storage... 00:04:24.675 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:24.675 10:56:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:24.675 10:56:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:24.675 10:56:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:24.675 10:56:21 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:24.675 10:56:21 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:24.675 10:56:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.675 ************************************ 00:04:24.675 START TEST skip_rpc 00:04:24.675 ************************************ 00:04:24.675 10:56:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:04:24.675 10:56:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1378719 00:04:24.675 10:56:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.675 10:56:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:24.675 10:56:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:24.675 [2024-05-15 10:56:21.893539] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
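The skip_rpc case starting here launches spdk_tgt with --no-rpc-server and then expects every rpc_cmd call to fail, since no /var/tmp/spdk.sock listener exists. A minimal manual sketch of the same check (relative binary and script paths are assumptions for illustration; the test itself uses the absolute workspace paths shown in the log):

# start the target without an RPC server, then confirm RPCs are rejected
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC succeeded without an RPC server" >&2; exit 1
fi
kill "$tgt_pid"; wait "$tgt_pid"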
00:04:24.675 [2024-05-15 10:56:21.893620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378719 ] 00:04:24.675 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.051 [2024-05-15 10:56:21.963818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.051 [2024-05-15 10:56:22.036167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:30.338 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1378719 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 1378719 ']' 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 1378719 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1378719 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1378719' 00:04:30.339 killing process with pid 1378719 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 1378719 00:04:30.339 10:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 1378719 00:04:30.339 00:04:30.339 real 0m5.358s 00:04:30.339 user 0m5.114s 00:04:30.339 sys 0m0.275s 00:04:30.339 10:56:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:30.339 10:56:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.339 ************************************ 00:04:30.339 END TEST skip_rpc 
00:04:30.339 ************************************ 00:04:30.339 10:56:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.339 10:56:27 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:30.339 10:56:27 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:30.339 10:56:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.339 ************************************ 00:04:30.339 START TEST skip_rpc_with_json 00:04:30.339 ************************************ 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1379565 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1379565 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 1379565 ']' 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.339 10:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.339 [2024-05-15 10:56:27.329650] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
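The skip_rpc_with_json case starting here runs the target with its RPC server enabled, creates a TCP transport, dumps the live configuration to test/rpc/config.json with save_config, and later replays that file by relaunching spdk_tgt with --json. A condensed sketch of that save-and-replay flow (relative paths and the /tmp file name are illustrative assumptions):

./scripts/rpc.py nvmf_create_transport -t tcp      # matches the rpc_cmd call in the log
./scripts/rpc.py save_config > /tmp/config.json    # serialize the current subsystem config
# replay the saved configuration non-interactively
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json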
00:04:30.339 [2024-05-15 10:56:27.329723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379565 ] 00:04:30.339 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.339 [2024-05-15 10:56:27.401480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.339 [2024-05-15 10:56:27.479481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.907 [2024-05-15 10:56:28.129096] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.907 request: 00:04:30.907 { 00:04:30.907 "trtype": "tcp", 00:04:30.907 "method": "nvmf_get_transports", 00:04:30.907 "req_id": 1 00:04:30.907 } 00:04:30.907 Got JSON-RPC error response 00:04:30.907 response: 00:04:30.907 { 00:04:30.907 "code": -19, 00:04:30.907 "message": "No such device" 00:04:30.907 } 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.907 [2024-05-15 10:56:28.141190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:30.907 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.167 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:31.167 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:31.167 { 00:04:31.167 "subsystems": [ 00:04:31.167 { 00:04:31.167 "subsystem": "scheduler", 00:04:31.167 "config": [ 00:04:31.167 { 00:04:31.167 "method": "framework_set_scheduler", 00:04:31.167 "params": { 00:04:31.167 "name": "static" 00:04:31.167 } 00:04:31.167 } 00:04:31.167 ] 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "subsystem": "vmd", 00:04:31.167 "config": [] 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "subsystem": "sock", 00:04:31.167 "config": [ 00:04:31.167 { 00:04:31.167 "method": "sock_impl_set_options", 00:04:31.167 "params": { 00:04:31.167 "impl_name": "posix", 00:04:31.167 "recv_buf_size": 2097152, 00:04:31.167 "send_buf_size": 2097152, 00:04:31.167 "enable_recv_pipe": true, 00:04:31.167 "enable_quickack": false, 00:04:31.167 "enable_placement_id": 0, 00:04:31.167 "enable_zerocopy_send_server": true, 00:04:31.167 "enable_zerocopy_send_client": false, 
00:04:31.167 "zerocopy_threshold": 0, 00:04:31.167 "tls_version": 0, 00:04:31.167 "enable_ktls": false 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "sock_impl_set_options", 00:04:31.167 "params": { 00:04:31.167 "impl_name": "ssl", 00:04:31.167 "recv_buf_size": 4096, 00:04:31.167 "send_buf_size": 4096, 00:04:31.167 "enable_recv_pipe": true, 00:04:31.167 "enable_quickack": false, 00:04:31.167 "enable_placement_id": 0, 00:04:31.167 "enable_zerocopy_send_server": true, 00:04:31.167 "enable_zerocopy_send_client": false, 00:04:31.167 "zerocopy_threshold": 0, 00:04:31.167 "tls_version": 0, 00:04:31.167 "enable_ktls": false 00:04:31.167 } 00:04:31.167 } 00:04:31.167 ] 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "subsystem": "iobuf", 00:04:31.167 "config": [ 00:04:31.167 { 00:04:31.167 "method": "iobuf_set_options", 00:04:31.167 "params": { 00:04:31.167 "small_pool_count": 8192, 00:04:31.167 "large_pool_count": 1024, 00:04:31.167 "small_bufsize": 8192, 00:04:31.167 "large_bufsize": 135168 00:04:31.167 } 00:04:31.167 } 00:04:31.167 ] 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "subsystem": "keyring", 00:04:31.167 "config": [] 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "subsystem": "vfio_user_target", 00:04:31.167 "config": null 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "subsystem": "accel", 00:04:31.167 "config": [ 00:04:31.167 { 00:04:31.167 "method": "accel_set_options", 00:04:31.167 "params": { 00:04:31.167 "small_cache_size": 128, 00:04:31.167 "large_cache_size": 16, 00:04:31.167 "task_count": 2048, 00:04:31.167 "sequence_count": 2048, 00:04:31.167 "buf_count": 2048 00:04:31.167 } 00:04:31.167 } 00:04:31.167 ] 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "subsystem": "bdev", 00:04:31.167 "config": [ 00:04:31.167 { 00:04:31.167 "method": "bdev_set_options", 00:04:31.167 "params": { 00:04:31.167 "bdev_io_pool_size": 65535, 00:04:31.167 "bdev_io_cache_size": 256, 00:04:31.167 "bdev_auto_examine": true, 00:04:31.167 "iobuf_small_cache_size": 128, 00:04:31.167 "iobuf_large_cache_size": 16 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "bdev_raid_set_options", 00:04:31.167 "params": { 00:04:31.167 "process_window_size_kb": 1024 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "bdev_nvme_set_options", 00:04:31.167 "params": { 00:04:31.167 "action_on_timeout": "none", 00:04:31.167 "timeout_us": 0, 00:04:31.167 "timeout_admin_us": 0, 00:04:31.167 "keep_alive_timeout_ms": 10000, 00:04:31.167 "arbitration_burst": 0, 00:04:31.167 "low_priority_weight": 0, 00:04:31.167 "medium_priority_weight": 0, 00:04:31.167 "high_priority_weight": 0, 00:04:31.167 "nvme_adminq_poll_period_us": 10000, 00:04:31.167 "nvme_ioq_poll_period_us": 0, 00:04:31.167 "io_queue_requests": 0, 00:04:31.167 "delay_cmd_submit": true, 00:04:31.167 "transport_retry_count": 4, 00:04:31.167 "bdev_retry_count": 3, 00:04:31.167 "transport_ack_timeout": 0, 00:04:31.167 "ctrlr_loss_timeout_sec": 0, 00:04:31.167 "reconnect_delay_sec": 0, 00:04:31.167 "fast_io_fail_timeout_sec": 0, 00:04:31.167 "disable_auto_failback": false, 00:04:31.167 "generate_uuids": false, 00:04:31.167 "transport_tos": 0, 00:04:31.167 "nvme_error_stat": false, 00:04:31.167 "rdma_srq_size": 0, 00:04:31.167 "io_path_stat": false, 00:04:31.167 "allow_accel_sequence": false, 00:04:31.167 "rdma_max_cq_size": 0, 00:04:31.167 "rdma_cm_event_timeout_ms": 0, 00:04:31.167 "dhchap_digests": [ 00:04:31.167 "sha256", 00:04:31.167 "sha384", 00:04:31.167 "sha512" 00:04:31.167 ], 00:04:31.167 "dhchap_dhgroups": [ 
00:04:31.167 "null", 00:04:31.167 "ffdhe2048", 00:04:31.167 "ffdhe3072", 00:04:31.167 "ffdhe4096", 00:04:31.167 "ffdhe6144", 00:04:31.167 "ffdhe8192" 00:04:31.167 ] 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "bdev_nvme_set_hotplug", 00:04:31.167 "params": { 00:04:31.167 "period_us": 100000, 00:04:31.167 "enable": false 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "bdev_iscsi_set_options", 00:04:31.167 "params": { 00:04:31.167 "timeout_sec": 30 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "bdev_wait_for_examine" 00:04:31.167 } 00:04:31.167 ] 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "subsystem": "nvmf", 00:04:31.167 "config": [ 00:04:31.167 { 00:04:31.167 "method": "nvmf_set_config", 00:04:31.167 "params": { 00:04:31.167 "discovery_filter": "match_any", 00:04:31.167 "admin_cmd_passthru": { 00:04:31.167 "identify_ctrlr": false 00:04:31.167 } 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "nvmf_set_max_subsystems", 00:04:31.167 "params": { 00:04:31.167 "max_subsystems": 1024 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "nvmf_set_crdt", 00:04:31.167 "params": { 00:04:31.167 "crdt1": 0, 00:04:31.167 "crdt2": 0, 00:04:31.167 "crdt3": 0 00:04:31.167 } 00:04:31.167 }, 00:04:31.167 { 00:04:31.167 "method": "nvmf_create_transport", 00:04:31.167 "params": { 00:04:31.167 "trtype": "TCP", 00:04:31.167 "max_queue_depth": 128, 00:04:31.167 "max_io_qpairs_per_ctrlr": 127, 00:04:31.167 "in_capsule_data_size": 4096, 00:04:31.167 "max_io_size": 131072, 00:04:31.167 "io_unit_size": 131072, 00:04:31.167 "max_aq_depth": 128, 00:04:31.167 "num_shared_buffers": 511, 00:04:31.167 "buf_cache_size": 4294967295, 00:04:31.167 "dif_insert_or_strip": false, 00:04:31.167 "zcopy": false, 00:04:31.167 "c2h_success": true, 00:04:31.167 "sock_priority": 0, 00:04:31.168 "abort_timeout_sec": 1, 00:04:31.168 "ack_timeout": 0, 00:04:31.168 "data_wr_pool_size": 0 00:04:31.168 } 00:04:31.168 } 00:04:31.168 ] 00:04:31.168 }, 00:04:31.168 { 00:04:31.168 "subsystem": "nbd", 00:04:31.168 "config": [] 00:04:31.168 }, 00:04:31.168 { 00:04:31.168 "subsystem": "ublk", 00:04:31.168 "config": [] 00:04:31.168 }, 00:04:31.168 { 00:04:31.168 "subsystem": "vhost_blk", 00:04:31.168 "config": [] 00:04:31.168 }, 00:04:31.168 { 00:04:31.168 "subsystem": "scsi", 00:04:31.168 "config": null 00:04:31.168 }, 00:04:31.168 { 00:04:31.168 "subsystem": "iscsi", 00:04:31.168 "config": [ 00:04:31.168 { 00:04:31.168 "method": "iscsi_set_options", 00:04:31.168 "params": { 00:04:31.168 "node_base": "iqn.2016-06.io.spdk", 00:04:31.168 "max_sessions": 128, 00:04:31.168 "max_connections_per_session": 2, 00:04:31.168 "max_queue_depth": 64, 00:04:31.168 "default_time2wait": 2, 00:04:31.168 "default_time2retain": 20, 00:04:31.168 "first_burst_length": 8192, 00:04:31.168 "immediate_data": true, 00:04:31.168 "allow_duplicated_isid": false, 00:04:31.168 "error_recovery_level": 0, 00:04:31.168 "nop_timeout": 60, 00:04:31.168 "nop_in_interval": 30, 00:04:31.168 "disable_chap": false, 00:04:31.168 "require_chap": false, 00:04:31.168 "mutual_chap": false, 00:04:31.168 "chap_group": 0, 00:04:31.168 "max_large_datain_per_connection": 64, 00:04:31.168 "max_r2t_per_connection": 4, 00:04:31.168 "pdu_pool_size": 36864, 00:04:31.168 "immediate_data_pool_size": 16384, 00:04:31.168 "data_out_pool_size": 2048 00:04:31.168 } 00:04:31.168 } 00:04:31.168 ] 00:04:31.168 }, 00:04:31.168 { 00:04:31.168 "subsystem": "vhost_scsi", 00:04:31.168 "config": [] 00:04:31.168 } 
00:04:31.168 ] 00:04:31.168 } 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1379565 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 1379565 ']' 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 1379565 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1379565 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1379565' 00:04:31.168 killing process with pid 1379565 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 1379565 00:04:31.168 10:56:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 1379565 00:04:31.427 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1379839 00:04:31.427 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:31.427 10:56:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1379839 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 1379839 ']' 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 1379839 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1379839 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1379839' 00:04:36.708 killing process with pid 1379839 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 1379839 00:04:36.708 10:56:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 1379839 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:36.968 00:04:36.968 real 0m6.711s 00:04:36.968 user 0m6.500s 00:04:36.968 sys 0m0.610s 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # 
xtrace_disable 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.968 ************************************ 00:04:36.968 END TEST skip_rpc_with_json 00:04:36.968 ************************************ 00:04:36.968 10:56:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:36.968 10:56:34 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:36.968 10:56:34 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:36.968 10:56:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.968 ************************************ 00:04:36.968 START TEST skip_rpc_with_delay 00:04:36.968 ************************************ 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.968 [2024-05-15 10:56:34.119114] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
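The error above is the expected result of the skip_rpc_with_delay case: --wait-for-rpc asks the target to pause subsystem initialization until an RPC releases it, which cannot work when --no-rpc-server suppresses the RPC server entirely. For contrast, the normal --wait-for-rpc flow looks roughly like this sketch (paths illustrative; framework_start_init is the RPC that resumes initialization):

./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &   # target starts, subsystems stay uninitialized
./scripts/rpc.py framework_start_init          # release initialization once pre-init RPCs are done
./scripts/rpc.py framework_wait_init           # optionally block until initialization completes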
00:04:36.968 [2024-05-15 10:56:34.119248] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:36.968 00:04:36.968 real 0m0.040s 00:04:36.968 user 0m0.021s 00:04:36.968 sys 0m0.020s 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:36.968 10:56:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:36.968 ************************************ 00:04:36.968 END TEST skip_rpc_with_delay 00:04:36.968 ************************************ 00:04:36.968 10:56:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:36.968 10:56:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:36.968 10:56:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:36.968 10:56:34 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:36.968 10:56:34 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:36.968 10:56:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.968 ************************************ 00:04:36.968 START TEST exit_on_failed_rpc_init 00:04:36.968 ************************************ 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1380944 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1380944 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 1380944 ']' 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.968 10:56:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.968 [2024-05-15 10:56:34.228230] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
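The exit_on_failed_rpc_init case starting here brings up one spdk_tgt on the default /var/tmp/spdk.sock and then starts a second instance that must fail with the "socket path in use" error reported further below. Running two targets side by side requires giving each its own RPC socket, e.g. (second socket name is an illustrative assumption):

./build/bin/spdk_tgt -m 0x1 &                         # owns /var/tmp/spdk.sock
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &  # second instance on a separate RPC socket
./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version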
00:04:36.968 [2024-05-15 10:56:34.228306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380944 ] 00:04:37.227 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.227 [2024-05-15 10:56:34.298238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.227 [2024-05-15 10:56:34.376250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:37.797 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.057 [2024-05-15 10:56:35.073200] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:38.057 [2024-05-15 10:56:35.073272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380976 ] 00:04:38.057 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.057 [2024-05-15 10:56:35.141640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.057 [2024-05-15 10:56:35.215052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.057 [2024-05-15 10:56:35.215138] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:04:38.057 [2024-05-15 10:56:35.215151] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:38.057 [2024-05-15 10:56:35.215158] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1380944 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 1380944 ']' 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 1380944 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:38.057 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1380944 00:04:38.316 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:38.316 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:38.316 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1380944' 00:04:38.316 killing process with pid 1380944 00:04:38.316 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 1380944 00:04:38.316 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 1380944 00:04:38.576 00:04:38.576 real 0m1.430s 00:04:38.576 user 0m1.584s 00:04:38.576 sys 0m0.445s 00:04:38.576 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.576 10:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.576 ************************************ 00:04:38.576 END TEST exit_on_failed_rpc_init 00:04:38.576 ************************************ 00:04:38.576 10:56:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:38.576 00:04:38.576 real 0m13.940s 00:04:38.576 user 0m13.361s 00:04:38.576 sys 0m1.621s 00:04:38.576 10:56:35 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.576 10:56:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.576 ************************************ 00:04:38.576 END TEST skip_rpc 00:04:38.576 ************************************ 00:04:38.576 10:56:35 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:38.576 10:56:35 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:38.576 10:56:35 -- common/autotest_common.sh@1104 -- # xtrace_disable 
00:04:38.576 10:56:35 -- common/autotest_common.sh@10 -- # set +x 00:04:38.576 ************************************ 00:04:38.576 START TEST rpc_client 00:04:38.576 ************************************ 00:04:38.576 10:56:35 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:38.836 * Looking for test storage... 00:04:38.836 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:38.836 10:56:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:38.836 OK 00:04:38.836 10:56:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:38.836 00:04:38.836 real 0m0.131s 00:04:38.836 user 0m0.050s 00:04:38.836 sys 0m0.090s 00:04:38.836 10:56:35 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.836 10:56:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:38.836 ************************************ 00:04:38.836 END TEST rpc_client 00:04:38.836 ************************************ 00:04:38.836 10:56:35 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:38.836 10:56:35 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:38.836 10:56:35 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:38.836 10:56:35 -- common/autotest_common.sh@10 -- # set +x 00:04:38.836 ************************************ 00:04:38.836 START TEST json_config 00:04:38.836 ************************************ 00:04:38.836 10:56:35 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:38.836 10:56:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.836 10:56:36 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:38.836 10:56:36 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.836 10:56:36 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.836 10:56:36 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.836 10:56:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.836 10:56:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.836 10:56:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.836 10:56:36 json_config -- paths/export.sh@5 -- # export PATH 00:04:38.836 10:56:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@47 -- # : 0 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:38.836 10:56:36 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:38.836 10:56:36 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:38.836 10:56:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:38.836 10:56:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:38.836 10:56:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:38.836 10:56:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:38.836 10:56:36 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:38.836 WARNING: No tests are enabled so not running JSON configuration tests 00:04:38.836 10:56:36 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:38.836 00:04:38.836 real 0m0.109s 00:04:38.836 user 0m0.049s 00:04:38.836 sys 0m0.061s 00:04:38.836 10:56:36 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.836 10:56:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.836 ************************************ 00:04:38.836 END TEST json_config 00:04:38.836 ************************************ 00:04:39.096 10:56:36 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:39.096 10:56:36 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:39.096 10:56:36 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:39.096 10:56:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.096 ************************************ 00:04:39.096 START TEST json_config_extra_key 00:04:39.096 ************************************ 00:04:39.096 10:56:36 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:39.096 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.096 10:56:36 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
00:04:39.096 10:56:36 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.096 10:56:36 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.096 10:56:36 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.096 10:56:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.096 10:56:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.096 10:56:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.096 10:56:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:39.097 10:56:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.097 10:56:36 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:39.097 10:56:36 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:39.097 INFO: launching applications... 00:04:39.097 10:56:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1381369 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.097 Waiting for target to run... 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1381369 /var/tmp/spdk_tgt.sock 00:04:39.097 10:56:36 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 1381369 ']' 00:04:39.097 10:56:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:39.097 10:56:36 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.097 10:56:36 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:39.097 10:56:36 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.097 10:56:36 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:39.097 10:56:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.097 [2024-05-15 10:56:36.299670] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
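The json_config_extra_key run configured above starts the target from a pre-baked JSON file on a dedicated RPC socket, waits for it to listen, and finally stops it with SIGINT. A condensed sketch of that launch and teardown (same flags as in the log; the readiness check via spdk_get_version is an illustrative stand-in for the test's waitforlisten helper):

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json test/json_config/extra_key.json &
app_pid=$!
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version   # succeeds once the socket is up
kill -SIGINT "$app_pid"; wait "$app_pid"                      # log then reports 'SPDK target shutdown done'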
00:04:39.097 [2024-05-15 10:56:36.299753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381369 ] 00:04:39.097 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.665 [2024-05-15 10:56:36.731756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.665 [2024-05-15 10:56:36.811942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.925 10:56:37 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:39.925 10:56:37 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:39.925 00:04:39.925 10:56:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:39.925 INFO: shutting down applications... 00:04:39.925 10:56:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1381369 ]] 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1381369 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1381369 00:04:39.925 10:56:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.494 10:56:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.494 10:56:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.494 10:56:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1381369 00:04:40.494 10:56:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.494 10:56:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:40.494 10:56:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.494 10:56:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.494 SPDK target shutdown done 00:04:40.494 10:56:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.494 Success 00:04:40.494 00:04:40.494 real 0m1.448s 00:04:40.494 user 0m1.023s 00:04:40.494 sys 0m0.556s 00:04:40.494 10:56:37 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:40.494 10:56:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.494 ************************************ 00:04:40.494 END TEST json_config_extra_key 00:04:40.494 ************************************ 00:04:40.494 10:56:37 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.494 10:56:37 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:40.494 10:56:37 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:40.494 10:56:37 -- common/autotest_common.sh@10 -- # set +x 00:04:40.494 ************************************ 
00:04:40.494 START TEST alias_rpc 00:04:40.494 ************************************ 00:04:40.494 10:56:37 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.754 * Looking for test storage... 00:04:40.754 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:40.754 10:56:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.754 10:56:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.754 10:56:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1381691 00:04:40.754 10:56:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1381691 00:04:40.754 10:56:37 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 1381691 ']' 00:04:40.754 10:56:37 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.754 10:56:37 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:40.754 10:56:37 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.754 10:56:37 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:40.754 10:56:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.754 [2024-05-15 10:56:37.803631] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:40.754 [2024-05-15 10:56:37.803687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381691 ] 00:04:40.754 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.754 [2024-05-15 10:56:37.870251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.754 [2024-05-15 10:56:37.947849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:04:41.692 10:56:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:41.692 10:56:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1381691 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 1381691 ']' 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 1381691 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1381691 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1381691' 00:04:41.692 killing process with pid 1381691 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@966 -- # kill 1381691 00:04:41.692 10:56:38 alias_rpc -- common/autotest_common.sh@971 -- # wait 
1381691 00:04:41.952 00:04:41.952 real 0m1.499s 00:04:41.952 user 0m1.615s 00:04:41.952 sys 0m0.434s 00:04:41.952 10:56:39 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:41.952 10:56:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.952 ************************************ 00:04:41.952 END TEST alias_rpc 00:04:41.952 ************************************ 00:04:42.212 10:56:39 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:42.212 10:56:39 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.212 10:56:39 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:42.212 10:56:39 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:42.212 10:56:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.212 ************************************ 00:04:42.212 START TEST spdkcli_tcp 00:04:42.212 ************************************ 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.212 * Looking for test storage... 00:04:42.212 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1382011 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1382011 00:04:42.212 10:56:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 1382011 ']' 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:42.212 10:56:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.212 [2024-05-15 10:56:39.399287] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:04:42.212 [2024-05-15 10:56:39.399407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382011 ] 00:04:42.212 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.212 [2024-05-15 10:56:39.469218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.472 [2024-05-15 10:56:39.544426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.472 [2024-05-15 10:56:39.544429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.041 10:56:40 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:43.041 10:56:40 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:04:43.041 10:56:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1382275 00:04:43.041 10:56:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.041 10:56:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:43.301 [ 00:04:43.301 "spdk_get_version", 00:04:43.301 "rpc_get_methods", 00:04:43.301 "trace_get_info", 00:04:43.301 "trace_get_tpoint_group_mask", 00:04:43.301 "trace_disable_tpoint_group", 00:04:43.301 "trace_enable_tpoint_group", 00:04:43.301 "trace_clear_tpoint_mask", 00:04:43.301 "trace_set_tpoint_mask", 00:04:43.301 "vfu_tgt_set_base_path", 00:04:43.301 "framework_get_pci_devices", 00:04:43.301 "framework_get_config", 00:04:43.301 "framework_get_subsystems", 00:04:43.301 "keyring_get_keys", 00:04:43.301 "iobuf_get_stats", 00:04:43.301 "iobuf_set_options", 00:04:43.301 "sock_get_default_impl", 00:04:43.301 "sock_set_default_impl", 00:04:43.301 "sock_impl_set_options", 00:04:43.301 "sock_impl_get_options", 00:04:43.301 "vmd_rescan", 00:04:43.301 "vmd_remove_device", 00:04:43.301 "vmd_enable", 00:04:43.301 "accel_get_stats", 00:04:43.301 "accel_set_options", 00:04:43.301 "accel_set_driver", 00:04:43.301 "accel_crypto_key_destroy", 00:04:43.301 "accel_crypto_keys_get", 00:04:43.301 "accel_crypto_key_create", 00:04:43.301 "accel_assign_opc", 00:04:43.301 "accel_get_module_info", 00:04:43.301 "accel_get_opc_assignments", 00:04:43.301 "notify_get_notifications", 00:04:43.301 "notify_get_types", 00:04:43.301 "bdev_get_histogram", 00:04:43.301 "bdev_enable_histogram", 00:04:43.301 "bdev_set_qos_limit", 00:04:43.301 "bdev_set_qd_sampling_period", 00:04:43.301 "bdev_get_bdevs", 00:04:43.301 "bdev_reset_iostat", 00:04:43.301 "bdev_get_iostat", 00:04:43.301 "bdev_examine", 00:04:43.301 "bdev_wait_for_examine", 00:04:43.301 "bdev_set_options", 00:04:43.301 "scsi_get_devices", 00:04:43.301 "thread_set_cpumask", 00:04:43.301 "framework_get_scheduler", 00:04:43.301 "framework_set_scheduler", 00:04:43.301 "framework_get_reactors", 00:04:43.301 "thread_get_io_channels", 00:04:43.301 "thread_get_pollers", 00:04:43.301 "thread_get_stats", 00:04:43.301 "framework_monitor_context_switch", 00:04:43.301 "spdk_kill_instance", 00:04:43.301 "log_enable_timestamps", 00:04:43.301 "log_get_flags", 00:04:43.301 "log_clear_flag", 00:04:43.301 "log_set_flag", 00:04:43.301 "log_get_level", 00:04:43.301 "log_set_level", 00:04:43.301 "log_get_print_level", 00:04:43.301 "log_set_print_level", 00:04:43.301 "framework_enable_cpumask_locks", 00:04:43.301 "framework_disable_cpumask_locks", 00:04:43.301 "framework_wait_init", 00:04:43.301 
"framework_start_init", 00:04:43.301 "virtio_blk_create_transport", 00:04:43.301 "virtio_blk_get_transports", 00:04:43.301 "vhost_controller_set_coalescing", 00:04:43.301 "vhost_get_controllers", 00:04:43.301 "vhost_delete_controller", 00:04:43.301 "vhost_create_blk_controller", 00:04:43.301 "vhost_scsi_controller_remove_target", 00:04:43.301 "vhost_scsi_controller_add_target", 00:04:43.301 "vhost_start_scsi_controller", 00:04:43.301 "vhost_create_scsi_controller", 00:04:43.301 "ublk_recover_disk", 00:04:43.301 "ublk_get_disks", 00:04:43.301 "ublk_stop_disk", 00:04:43.301 "ublk_start_disk", 00:04:43.301 "ublk_destroy_target", 00:04:43.301 "ublk_create_target", 00:04:43.301 "nbd_get_disks", 00:04:43.301 "nbd_stop_disk", 00:04:43.301 "nbd_start_disk", 00:04:43.301 "env_dpdk_get_mem_stats", 00:04:43.301 "nvmf_stop_mdns_prr", 00:04:43.301 "nvmf_publish_mdns_prr", 00:04:43.301 "nvmf_subsystem_get_listeners", 00:04:43.301 "nvmf_subsystem_get_qpairs", 00:04:43.301 "nvmf_subsystem_get_controllers", 00:04:43.301 "nvmf_get_stats", 00:04:43.301 "nvmf_get_transports", 00:04:43.301 "nvmf_create_transport", 00:04:43.301 "nvmf_get_targets", 00:04:43.301 "nvmf_delete_target", 00:04:43.301 "nvmf_create_target", 00:04:43.301 "nvmf_subsystem_allow_any_host", 00:04:43.301 "nvmf_subsystem_remove_host", 00:04:43.301 "nvmf_subsystem_add_host", 00:04:43.301 "nvmf_ns_remove_host", 00:04:43.301 "nvmf_ns_add_host", 00:04:43.301 "nvmf_subsystem_remove_ns", 00:04:43.301 "nvmf_subsystem_add_ns", 00:04:43.301 "nvmf_subsystem_listener_set_ana_state", 00:04:43.301 "nvmf_discovery_get_referrals", 00:04:43.301 "nvmf_discovery_remove_referral", 00:04:43.301 "nvmf_discovery_add_referral", 00:04:43.301 "nvmf_subsystem_remove_listener", 00:04:43.301 "nvmf_subsystem_add_listener", 00:04:43.301 "nvmf_delete_subsystem", 00:04:43.301 "nvmf_create_subsystem", 00:04:43.301 "nvmf_get_subsystems", 00:04:43.301 "nvmf_set_crdt", 00:04:43.301 "nvmf_set_config", 00:04:43.301 "nvmf_set_max_subsystems", 00:04:43.301 "iscsi_get_histogram", 00:04:43.301 "iscsi_enable_histogram", 00:04:43.301 "iscsi_set_options", 00:04:43.301 "iscsi_get_auth_groups", 00:04:43.301 "iscsi_auth_group_remove_secret", 00:04:43.301 "iscsi_auth_group_add_secret", 00:04:43.301 "iscsi_delete_auth_group", 00:04:43.301 "iscsi_create_auth_group", 00:04:43.301 "iscsi_set_discovery_auth", 00:04:43.301 "iscsi_get_options", 00:04:43.301 "iscsi_target_node_request_logout", 00:04:43.301 "iscsi_target_node_set_redirect", 00:04:43.301 "iscsi_target_node_set_auth", 00:04:43.301 "iscsi_target_node_add_lun", 00:04:43.301 "iscsi_get_stats", 00:04:43.301 "iscsi_get_connections", 00:04:43.301 "iscsi_portal_group_set_auth", 00:04:43.301 "iscsi_start_portal_group", 00:04:43.301 "iscsi_delete_portal_group", 00:04:43.301 "iscsi_create_portal_group", 00:04:43.301 "iscsi_get_portal_groups", 00:04:43.301 "iscsi_delete_target_node", 00:04:43.301 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.301 "iscsi_target_node_add_pg_ig_maps", 00:04:43.301 "iscsi_create_target_node", 00:04:43.301 "iscsi_get_target_nodes", 00:04:43.301 "iscsi_delete_initiator_group", 00:04:43.301 "iscsi_initiator_group_remove_initiators", 00:04:43.301 "iscsi_initiator_group_add_initiators", 00:04:43.301 "iscsi_create_initiator_group", 00:04:43.301 "iscsi_get_initiator_groups", 00:04:43.301 "keyring_file_remove_key", 00:04:43.301 "keyring_file_add_key", 00:04:43.301 "vfu_virtio_create_scsi_endpoint", 00:04:43.301 "vfu_virtio_scsi_remove_target", 00:04:43.301 "vfu_virtio_scsi_add_target", 00:04:43.301 
"vfu_virtio_create_blk_endpoint", 00:04:43.301 "vfu_virtio_delete_endpoint", 00:04:43.301 "iaa_scan_accel_module", 00:04:43.301 "dsa_scan_accel_module", 00:04:43.301 "ioat_scan_accel_module", 00:04:43.301 "accel_error_inject_error", 00:04:43.301 "bdev_iscsi_delete", 00:04:43.301 "bdev_iscsi_create", 00:04:43.301 "bdev_iscsi_set_options", 00:04:43.301 "bdev_virtio_attach_controller", 00:04:43.301 "bdev_virtio_scsi_get_devices", 00:04:43.301 "bdev_virtio_detach_controller", 00:04:43.301 "bdev_virtio_blk_set_hotplug", 00:04:43.301 "bdev_ftl_set_property", 00:04:43.301 "bdev_ftl_get_properties", 00:04:43.301 "bdev_ftl_get_stats", 00:04:43.301 "bdev_ftl_unmap", 00:04:43.301 "bdev_ftl_unload", 00:04:43.301 "bdev_ftl_delete", 00:04:43.301 "bdev_ftl_load", 00:04:43.301 "bdev_ftl_create", 00:04:43.301 "bdev_aio_delete", 00:04:43.301 "bdev_aio_rescan", 00:04:43.301 "bdev_aio_create", 00:04:43.301 "blobfs_create", 00:04:43.301 "blobfs_detect", 00:04:43.301 "blobfs_set_cache_size", 00:04:43.301 "bdev_zone_block_delete", 00:04:43.301 "bdev_zone_block_create", 00:04:43.301 "bdev_delay_delete", 00:04:43.301 "bdev_delay_create", 00:04:43.301 "bdev_delay_update_latency", 00:04:43.301 "bdev_split_delete", 00:04:43.301 "bdev_split_create", 00:04:43.302 "bdev_error_inject_error", 00:04:43.302 "bdev_error_delete", 00:04:43.302 "bdev_error_create", 00:04:43.302 "bdev_raid_set_options", 00:04:43.302 "bdev_raid_remove_base_bdev", 00:04:43.302 "bdev_raid_add_base_bdev", 00:04:43.302 "bdev_raid_delete", 00:04:43.302 "bdev_raid_create", 00:04:43.302 "bdev_raid_get_bdevs", 00:04:43.302 "bdev_lvol_check_shallow_copy", 00:04:43.302 "bdev_lvol_start_shallow_copy", 00:04:43.302 "bdev_lvol_grow_lvstore", 00:04:43.302 "bdev_lvol_get_lvols", 00:04:43.302 "bdev_lvol_get_lvstores", 00:04:43.302 "bdev_lvol_delete", 00:04:43.302 "bdev_lvol_set_read_only", 00:04:43.302 "bdev_lvol_resize", 00:04:43.302 "bdev_lvol_decouple_parent", 00:04:43.302 "bdev_lvol_inflate", 00:04:43.302 "bdev_lvol_rename", 00:04:43.302 "bdev_lvol_clone_bdev", 00:04:43.302 "bdev_lvol_clone", 00:04:43.302 "bdev_lvol_snapshot", 00:04:43.302 "bdev_lvol_create", 00:04:43.302 "bdev_lvol_delete_lvstore", 00:04:43.302 "bdev_lvol_rename_lvstore", 00:04:43.302 "bdev_lvol_create_lvstore", 00:04:43.302 "bdev_passthru_delete", 00:04:43.302 "bdev_passthru_create", 00:04:43.302 "bdev_nvme_cuse_unregister", 00:04:43.302 "bdev_nvme_cuse_register", 00:04:43.302 "bdev_opal_new_user", 00:04:43.302 "bdev_opal_set_lock_state", 00:04:43.302 "bdev_opal_delete", 00:04:43.302 "bdev_opal_get_info", 00:04:43.302 "bdev_opal_create", 00:04:43.302 "bdev_nvme_opal_revert", 00:04:43.302 "bdev_nvme_opal_init", 00:04:43.302 "bdev_nvme_send_cmd", 00:04:43.302 "bdev_nvme_get_path_iostat", 00:04:43.302 "bdev_nvme_get_mdns_discovery_info", 00:04:43.302 "bdev_nvme_stop_mdns_discovery", 00:04:43.302 "bdev_nvme_start_mdns_discovery", 00:04:43.302 "bdev_nvme_set_multipath_policy", 00:04:43.302 "bdev_nvme_set_preferred_path", 00:04:43.302 "bdev_nvme_get_io_paths", 00:04:43.302 "bdev_nvme_remove_error_injection", 00:04:43.302 "bdev_nvme_add_error_injection", 00:04:43.302 "bdev_nvme_get_discovery_info", 00:04:43.302 "bdev_nvme_stop_discovery", 00:04:43.302 "bdev_nvme_start_discovery", 00:04:43.302 "bdev_nvme_get_controller_health_info", 00:04:43.302 "bdev_nvme_disable_controller", 00:04:43.302 "bdev_nvme_enable_controller", 00:04:43.302 "bdev_nvme_reset_controller", 00:04:43.302 "bdev_nvme_get_transport_statistics", 00:04:43.302 "bdev_nvme_apply_firmware", 00:04:43.302 "bdev_nvme_detach_controller", 
00:04:43.302 "bdev_nvme_get_controllers", 00:04:43.302 "bdev_nvme_attach_controller", 00:04:43.302 "bdev_nvme_set_hotplug", 00:04:43.302 "bdev_nvme_set_options", 00:04:43.302 "bdev_null_resize", 00:04:43.302 "bdev_null_delete", 00:04:43.302 "bdev_null_create", 00:04:43.302 "bdev_malloc_delete", 00:04:43.302 "bdev_malloc_create" 00:04:43.302 ] 00:04:43.302 10:56:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.302 10:56:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:43.302 10:56:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1382011 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 1382011 ']' 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 1382011 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1382011 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1382011' 00:04:43.302 killing process with pid 1382011 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 1382011 00:04:43.302 10:56:40 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 1382011 00:04:43.560 00:04:43.560 real 0m1.529s 00:04:43.560 user 0m2.789s 00:04:43.560 sys 0m0.496s 00:04:43.560 10:56:40 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:43.560 10:56:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.560 ************************************ 00:04:43.560 END TEST spdkcli_tcp 00:04:43.560 ************************************ 00:04:43.817 10:56:40 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.817 10:56:40 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:43.817 10:56:40 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:43.817 10:56:40 -- common/autotest_common.sh@10 -- # set +x 00:04:43.817 ************************************ 00:04:43.817 START TEST dpdk_mem_utility 00:04:43.817 ************************************ 00:04:43.817 10:56:40 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.817 * Looking for test storage... 
00:04:43.817 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:43.817 10:56:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.817 10:56:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1382349 00:04:43.817 10:56:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1382349 00:04:43.817 10:56:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.817 10:56:40 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 1382349 ']' 00:04:43.817 10:56:40 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.817 10:56:40 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:43.817 10:56:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.817 10:56:40 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:43.817 10:56:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.817 [2024-05-15 10:56:41.011870] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:43.818 [2024-05-15 10:56:41.011941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382349 ] 00:04:43.818 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.818 [2024-05-15 10:56:41.080878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.076 [2024-05-15 10:56:41.159154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.642 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:44.642 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:04:44.642 10:56:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:44.642 10:56:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:44.642 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:44.642 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.642 { 00:04:44.642 "filename": "/tmp/spdk_mem_dump.txt" 00:04:44.642 } 00:04:44.642 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:44.642 10:56:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:44.642 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:44.642 1 heaps totaling size 814.000000 MiB 00:04:44.642 size: 814.000000 MiB heap id: 0 00:04:44.642 end heaps---------- 00:04:44.642 8 mempools totaling size 598.116089 MiB 00:04:44.642 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:44.642 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:44.642 size: 84.521057 MiB name: bdev_io_1382349 00:04:44.642 size: 51.011292 MiB name: evtpool_1382349 00:04:44.642 size: 50.003479 MiB 
name: msgpool_1382349 00:04:44.642 size: 21.763794 MiB name: PDU_Pool 00:04:44.642 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:44.642 size: 0.026123 MiB name: Session_Pool 00:04:44.642 end mempools------- 00:04:44.642 6 memzones totaling size 4.142822 MiB 00:04:44.642 size: 1.000366 MiB name: RG_ring_0_1382349 00:04:44.642 size: 1.000366 MiB name: RG_ring_1_1382349 00:04:44.642 size: 1.000366 MiB name: RG_ring_4_1382349 00:04:44.642 size: 1.000366 MiB name: RG_ring_5_1382349 00:04:44.642 size: 0.125366 MiB name: RG_ring_2_1382349 00:04:44.642 size: 0.015991 MiB name: RG_ring_3_1382349 00:04:44.642 end memzones------- 00:04:44.642 10:56:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:44.901 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:44.901 list of free elements. size: 12.519348 MiB 00:04:44.901 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:44.901 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:44.901 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:44.901 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:44.901 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:44.901 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:44.901 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:44.901 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:44.901 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:44.901 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:44.901 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:44.901 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:44.901 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:44.901 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:44.901 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:44.901 list of standard malloc elements. 
size: 199.218079 MiB 00:04:44.901 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:44.901 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:44.901 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:44.901 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:44.901 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:44.901 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:44.901 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:44.901 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:44.901 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:44.901 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:44.901 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:44.901 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:44.901 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:44.901 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:44.901 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:44.901 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:44.901 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:44.901 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:44.901 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:44.901 list of memzone associated elements. 
size: 602.262573 MiB 00:04:44.901 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:44.901 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:44.901 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:44.901 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:44.901 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:44.901 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1382349_0 00:04:44.901 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:44.901 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1382349_0 00:04:44.901 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:44.901 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1382349_0 00:04:44.901 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:44.901 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:44.901 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:44.901 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:44.901 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:44.901 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1382349 00:04:44.901 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:44.901 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1382349 00:04:44.901 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:44.901 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1382349 00:04:44.902 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:44.902 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:44.902 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:44.902 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:44.902 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:44.902 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:44.902 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:44.902 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:44.902 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:44.902 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1382349 00:04:44.902 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:44.902 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1382349 00:04:44.902 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:44.902 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1382349 00:04:44.902 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:44.902 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1382349 00:04:44.902 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:44.902 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1382349 00:04:44.902 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:44.902 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:44.902 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:44.902 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:44.902 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:44.902 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:44.902 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:44.902 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1382349 00:04:44.902 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:44.902 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:44.902 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:44.902 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:44.902 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:44.902 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1382349 00:04:44.902 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:44.902 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:44.902 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:44.902 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1382349 00:04:44.902 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:44.902 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1382349 00:04:44.902 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:44.902 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:44.902 10:56:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:44.902 10:56:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1382349 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 1382349 ']' 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 1382349 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1382349 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1382349' 00:04:44.902 killing process with pid 1382349 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 1382349 00:04:44.902 10:56:41 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 1382349 00:04:45.162 00:04:45.162 real 0m1.406s 00:04:45.162 user 0m1.422s 00:04:45.162 sys 0m0.442s 00:04:45.162 10:56:42 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:45.162 10:56:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.162 ************************************ 00:04:45.162 END TEST dpdk_mem_utility 00:04:45.162 ************************************ 00:04:45.162 10:56:42 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:45.162 10:56:42 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.162 10:56:42 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.162 10:56:42 -- common/autotest_common.sh@10 -- # set +x 00:04:45.162 ************************************ 00:04:45.162 START TEST event 00:04:45.162 ************************************ 00:04:45.162 10:56:42 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:45.422 * Looking for test storage... 
00:04:45.422 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:04:45.422 10:56:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:45.422 10:56:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:45.422 10:56:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.422 10:56:42 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:04:45.422 10:56:42 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.422 10:56:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.422 ************************************ 00:04:45.422 START TEST event_perf 00:04:45.422 ************************************ 00:04:45.422 10:56:42 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.422 Running I/O for 1 seconds...[2024-05-15 10:56:42.539628] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:45.422 [2024-05-15 10:56:42.539710] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382677 ] 00:04:45.422 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.422 [2024-05-15 10:56:42.610922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.682 [2024-05-15 10:56:42.689881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.682 [2024-05-15 10:56:42.689975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.682 [2024-05-15 10:56:42.690193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.682 [2024-05-15 10:56:42.690195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.618 Running I/O for 1 seconds... 00:04:46.618 lcore 0: 201765 00:04:46.619 lcore 1: 201764 00:04:46.619 lcore 2: 201765 00:04:46.619 lcore 3: 201765 00:04:46.619 done. 00:04:46.619 00:04:46.619 real 0m1.234s 00:04:46.619 user 0m4.138s 00:04:46.619 sys 0m0.092s 00:04:46.619 10:56:43 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:46.619 10:56:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.619 ************************************ 00:04:46.619 END TEST event_perf 00:04:46.619 ************************************ 00:04:46.619 10:56:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.619 10:56:43 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:04:46.619 10:56:43 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:46.619 10:56:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.619 ************************************ 00:04:46.619 START TEST event_reactor 00:04:46.619 ************************************ 00:04:46.619 10:56:43 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.619 [2024-05-15 10:56:43.857558] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:04:46.619 [2024-05-15 10:56:43.857637] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382969 ] 00:04:46.877 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.878 [2024-05-15 10:56:43.928775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.878 [2024-05-15 10:56:43.999608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.815 test_start 00:04:47.815 oneshot 00:04:47.815 tick 100 00:04:47.815 tick 100 00:04:47.815 tick 250 00:04:47.815 tick 100 00:04:47.815 tick 100 00:04:47.815 tick 100 00:04:47.815 tick 500 00:04:47.815 tick 250 00:04:47.815 tick 100 00:04:47.815 tick 100 00:04:47.815 tick 250 00:04:47.815 tick 100 00:04:47.815 tick 100 00:04:47.815 test_end 00:04:47.815 00:04:47.815 real 0m1.222s 00:04:47.815 user 0m1.131s 00:04:47.815 sys 0m0.086s 00:04:47.815 10:56:45 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:47.815 10:56:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:47.815 ************************************ 00:04:47.815 END TEST event_reactor 00:04:47.815 ************************************ 00:04:48.075 10:56:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.075 10:56:45 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:04:48.075 10:56:45 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:48.075 10:56:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.075 ************************************ 00:04:48.075 START TEST event_reactor_perf 00:04:48.075 ************************************ 00:04:48.075 10:56:45 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.075 [2024-05-15 10:56:45.162096] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:04:48.075 [2024-05-15 10:56:45.162176] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383249 ] 00:04:48.075 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.075 [2024-05-15 10:56:45.232293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.075 [2024-05-15 10:56:45.302603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.453 test_start 00:04:49.453 test_end 00:04:49.453 Performance: 980421 events per second 00:04:49.453 00:04:49.453 real 0m1.220s 00:04:49.453 user 0m1.134s 00:04:49.453 sys 0m0.082s 00:04:49.453 10:56:46 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:49.453 10:56:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.453 ************************************ 00:04:49.453 END TEST event_reactor_perf 00:04:49.453 ************************************ 00:04:49.453 10:56:46 event -- event/event.sh@49 -- # uname -s 00:04:49.453 10:56:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:49.453 10:56:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.453 10:56:46 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:49.453 10:56:46 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:49.453 10:56:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.453 ************************************ 00:04:49.453 START TEST event_scheduler 00:04:49.453 ************************************ 00:04:49.453 10:56:46 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.453 * Looking for test storage... 00:04:49.453 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:04:49.453 10:56:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:49.453 10:56:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1383555 00:04:49.453 10:56:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:49.453 10:56:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.453 10:56:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1383555 00:04:49.454 10:56:46 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 1383555 ']' 00:04:49.454 10:56:46 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.454 10:56:46 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:49.454 10:56:46 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:49.454 10:56:46 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:49.454 10:56:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.454 [2024-05-15 10:56:46.582798] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:49.454 [2024-05-15 10:56:46.582888] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383555 ] 00:04:49.454 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.454 [2024-05-15 10:56:46.650496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.712 [2024-05-15 10:56:46.726565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.712 [2024-05-15 10:56:46.726647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.712 [2024-05-15 10:56:46.726734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.712 [2024-05-15 10:56:46.726737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:04:50.281 10:56:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.281 POWER: Env isn't set yet! 00:04:50.281 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:50.281 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.281 POWER: Cannot set governor of lcore 0 to userspace 00:04:50.281 POWER: Attempting to initialise PSTAT power management... 00:04:50.281 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:50.281 POWER: Initialized successfully for lcore 0 power management 00:04:50.281 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:50.281 POWER: Initialized successfully for lcore 1 power management 00:04:50.281 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:50.281 POWER: Initialized successfully for lcore 2 power management 00:04:50.281 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:50.281 POWER: Initialized successfully for lcore 3 power management 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.281 10:56:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.281 [2024-05-15 10:56:47.535947] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
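For reference, the scheduler subtest above launches the test app with --wait-for-rpc, so nothing runs until the framework is configured over /var/tmp/spdk.sock: the harness sets the dynamic scheduler and only then calls framework_start_init, which is why the POWER/governor messages appear at that point. A minimal hand-driven sketch of the same bring-up is below; the binary path, core mask and socket name are taken from the log above, while the polling step and use of framework_wait_init are illustrative rather than a copy of the harness.

    # sketch: start the scheduler test app idle, then configure it over RPC
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    app_pid=$!
    # poll until the app is listening on the default RPC socket (rpc.py retries the connection)
    ./scripts/rpc.py -s /var/tmp/spdk.sock -t 2 -r 100 rpc_get_methods > /dev/null
    # the scheduler must be selected before subsystem init when --wait-for-rpc is used
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py framework_wait_init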
00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.281 10:56:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:50.281 10:56:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 ************************************ 00:04:50.541 START TEST scheduler_create_thread 00:04:50.541 ************************************ 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 2 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 3 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 4 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 5 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 6 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 7 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 8 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 9 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 10 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:50.541 10:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.478 10:56:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:51.478 10:56:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:51.478 10:56:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:51.478 10:56:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.853 10:56:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.853 10:56:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:52.853 10:56:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:52.853 10:56:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.853 10:56:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.787 10:56:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.787 00:04:53.787 real 0m3.381s 00:04:53.787 user 0m0.024s 00:04:53.787 sys 0m0.006s 00:04:53.787 10:56:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:53.787 10:56:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.787 ************************************ 00:04:53.787 END TEST scheduler_create_thread 00:04:53.787 ************************************ 00:04:53.787 10:56:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:53.787 10:56:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1383555 00:04:53.787 10:56:51 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 1383555 ']' 00:04:53.787 10:56:51 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 1383555 00:04:53.787 10:56:51 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:04:53.787 10:56:51 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:53.788 10:56:51 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1383555 00:04:54.047 10:56:51 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:04:54.047 10:56:51 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:04:54.047 10:56:51 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1383555' 00:04:54.047 killing process with pid 1383555 00:04:54.047 10:56:51 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 1383555 00:04:54.047 10:56:51 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 1383555 00:04:54.306 [2024-05-15 10:56:51.344197] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
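The scheduler_create_thread subtest that just finished drives everything through rpc.py's --plugin mechanism rather than built-in RPCs. A rough by-hand equivalent of the calls recorded above is sketched below; it assumes the scheduler_plugin module used by the test is importable (e.g. on PYTHONPATH), and the captured thread ids are simply whatever the create calls print, as seen in the thread_id=11/12 assignments in the log.

    # sketch: exercise the scheduler plugin RPCs shown in the subtest above
    export PYTHONPATH="$PYTHONPATH:./test/event/scheduler"
    # pinned threads: one always busy on core 0, one idle on core 1
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
    # unpinned threads: ~30% active, one whose activity is raised to 50%, one that is deleted
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"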
00:04:54.306 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:54.306 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:54.306 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:54.306 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:54.306 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:54.306 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:54.306 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:54.306 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:54.306 00:04:54.306 real 0m5.111s 00:04:54.306 user 0m10.554s 00:04:54.306 sys 0m0.453s 00:04:54.306 10:56:51 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:54.306 10:56:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.306 ************************************ 00:04:54.306 END TEST event_scheduler 00:04:54.306 ************************************ 00:04:54.565 10:56:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:54.565 10:56:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:54.565 10:56:51 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:54.565 10:56:51 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:54.565 10:56:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.565 ************************************ 00:04:54.565 START TEST app_repeat 00:04:54.565 ************************************ 00:04:54.565 10:56:51 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1384417 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1384417' 00:04:54.565 Process app_repeat pid: 1384417 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:54.565 spdk_app_start Round 0 00:04:54.565 10:56:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1384417 /var/tmp/spdk-nbd.sock 00:04:54.565 10:56:51 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1384417 ']' 00:04:54.565 10:56:51 event.app_repeat -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.565 10:56:51 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:54.565 10:56:51 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.565 10:56:51 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:54.565 10:56:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.565 [2024-05-15 10:56:51.692583] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:04:54.565 [2024-05-15 10:56:51.692663] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384417 ] 00:04:54.565 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.565 [2024-05-15 10:56:51.764626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.824 [2024-05-15 10:56:51.844601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.824 [2024-05-15 10:56:51.844605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.393 10:56:52 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:55.393 10:56:52 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:04:55.393 10:56:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.652 Malloc0 00:04:55.652 10:56:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.652 Malloc1 00:04:55.965 10:56:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.965 10:56:52 event.app_repeat -- bdev/nbd_common.sh@15 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.965 /dev/nbd0 00:04:55.965 10:56:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.965 10:56:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.965 1+0 records in 00:04:55.965 1+0 records out 00:04:55.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253761 s, 16.1 MB/s 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:55.965 10:56:53 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:55.965 10:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.965 10:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.965 10:56:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.248 /dev/nbd1 00:04:56.248 10:56:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.248 10:56:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.248 1+0 records in 00:04:56.248 1+0 records out 00:04:56.248 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000239895 s, 17.1 MB/s 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:04:56.248 10:56:53 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:04:56.248 10:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.248 10:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.248 10:56:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.248 10:56:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.248 10:56:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.507 10:56:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.507 { 00:04:56.507 "nbd_device": "/dev/nbd0", 00:04:56.507 "bdev_name": "Malloc0" 00:04:56.507 }, 00:04:56.507 { 00:04:56.507 "nbd_device": "/dev/nbd1", 00:04:56.507 "bdev_name": "Malloc1" 00:04:56.507 } 00:04:56.507 ]' 00:04:56.507 10:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.507 { 00:04:56.507 "nbd_device": "/dev/nbd0", 00:04:56.507 "bdev_name": "Malloc0" 00:04:56.507 }, 00:04:56.507 { 00:04:56.507 "nbd_device": "/dev/nbd1", 00:04:56.507 "bdev_name": "Malloc1" 00:04:56.507 } 00:04:56.507 ]' 00:04:56.507 10:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.507 10:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.507 /dev/nbd1' 00:04:56.507 10:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.507 /dev/nbd1' 00:04:56.507 10:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.508 256+0 records in 00:04:56.508 256+0 records out 00:04:56.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105832 s, 99.1 MB/s 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i 
in "${nbd_list[@]}" 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.508 256+0 records in 00:04:56.508 256+0 records out 00:04:56.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204488 s, 51.3 MB/s 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.508 256+0 records in 00:04:56.508 256+0 records out 00:04:56.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213761 s, 49.1 MB/s 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.508 10:56:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.766 10:56:53 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.766 10:56:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.025 10:56:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.284 10:56:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.284 10:56:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.284 10:56:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.284 10:56:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.284 10:56:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.285 10:56:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.285 10:56:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.285 10:56:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.285 10:56:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.285 10:56:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.285 10:56:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.544 [2024-05-15 10:56:54.666022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.544 [2024-05-15 10:56:54.731370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.544 [2024-05-15 10:56:54.731374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.544 [2024-05-15 10:56:54.771775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.544 [2024-05-15 10:56:54.771816] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
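Round 0 above is one full pass of the app_repeat data path, and every round in this trace repeats the same RPC sequence against /var/tmp/spdk-nbd.sock: create two Malloc bdevs, export them as NBD devices, write and verify random data, detach, then kill the instance and sleep before the next round. A condensed sketch of that per-round sequence, using only rpc.py subcommands that appear in the log (the rpc.py path is shortened and the loop wrapper is an assumption):

RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096                       # -> Malloc0
$RPC bdev_malloc_create 64 4096                       # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # reference data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest $nbd                     # verify the write round-tripped
done
rm nbdrandtest
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM                       # end of the round, then sleep 3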
00:05:00.834 10:56:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.834 10:56:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:00.834 spdk_app_start Round 1 00:05:00.834 10:56:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1384417 /var/tmp/spdk-nbd.sock 00:05:00.834 10:56:57 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1384417 ']' 00:05:00.834 10:56:57 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.834 10:56:57 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:00.834 10:56:57 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.834 10:56:57 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:00.834 10:56:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.834 10:56:57 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:00.834 10:56:57 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:00.834 10:56:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.834 Malloc0 00:05:00.834 10:56:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.834 Malloc1 00:05:00.835 10:56:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.835 10:56:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.094 /dev/nbd0 00:05:01.094 10:56:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.094 10:56:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.094 1+0 records in 00:05:01.094 1+0 records out 00:05:01.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210182 s, 19.5 MB/s 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:01.094 10:56:58 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:01.094 10:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.094 10:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.094 10:56:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.353 /dev/nbd1 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.353 1+0 records in 00:05:01.353 1+0 records out 00:05:01.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267791 s, 15.3 MB/s 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:01.353 10:56:58 
event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:01.353 10:56:58 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.353 { 00:05:01.353 "nbd_device": "/dev/nbd0", 00:05:01.353 "bdev_name": "Malloc0" 00:05:01.353 }, 00:05:01.353 { 00:05:01.353 "nbd_device": "/dev/nbd1", 00:05:01.353 "bdev_name": "Malloc1" 00:05:01.353 } 00:05:01.353 ]' 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.353 { 00:05:01.353 "nbd_device": "/dev/nbd0", 00:05:01.353 "bdev_name": "Malloc0" 00:05:01.353 }, 00:05:01.353 { 00:05:01.353 "nbd_device": "/dev/nbd1", 00:05:01.353 "bdev_name": "Malloc1" 00:05:01.353 } 00:05:01.353 ]' 00:05:01.353 10:56:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.612 /dev/nbd1' 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.612 /dev/nbd1' 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.612 256+0 records in 00:05:01.612 256+0 records out 00:05:01.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107202 s, 97.8 MB/s 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.612 256+0 records in 00:05:01.612 256+0 records out 00:05:01.612 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0203623 s, 51.5 MB/s 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.612 256+0 records in 00:05:01.612 256+0 records out 00:05:01.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02179 s, 48.1 MB/s 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.612 10:56:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.613 10:56:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.872 10:56:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.872 10:56:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.131 10:56:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.131 10:56:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.390 10:56:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.650 [2024-05-15 10:56:59.685171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.650 [2024-05-15 10:56:59.752338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.650 [2024-05-15 10:56:59.752341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.650 [2024-05-15 10:56:59.794346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.650 [2024-05-15 10:56:59.794392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
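Each nbd_start_disk in these rounds is followed by a waitfornbd check before any dd touches the device: per the trace it polls /proc/partitions for the device name and then reads one block back to confirm the export answers I/O. A sketch under those assumptions; the retry count and probe read come from the log, while the sleep between polls and the /tmp probe path are mine:

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                  # back off between polls (assumed)
    done
    # prove the device services reads, not just that it is listed
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}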
00:05:05.942 10:57:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.942 10:57:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:05.942 spdk_app_start Round 2 00:05:05.942 10:57:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1384417 /var/tmp/spdk-nbd.sock 00:05:05.942 10:57:02 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1384417 ']' 00:05:05.942 10:57:02 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.942 10:57:02 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:05.942 10:57:02 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.942 10:57:02 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:05.942 10:57:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.942 10:57:02 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:05.942 10:57:02 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:05.942 10:57:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.942 Malloc0 00:05:05.942 10:57:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.942 Malloc1 00:05:05.942 10:57:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.942 10:57:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.203 /dev/nbd0 00:05:06.203 10:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.203 10:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.203 1+0 records in 00:05:06.203 1+0 records out 00:05:06.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273998 s, 14.9 MB/s 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:06.203 10:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.203 10:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.203 10:57:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.203 /dev/nbd1 00:05:06.203 10:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.203 10:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.203 1+0 records in 00:05:06.203 1+0 records out 00:05:06.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242625 s, 16.9 MB/s 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:06.203 10:57:03 
event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:06.203 10:57:03 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:06.204 10:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.204 10:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.204 10:57:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.204 10:57:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.464 10:57:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.464 10:57:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.464 { 00:05:06.465 "nbd_device": "/dev/nbd0", 00:05:06.465 "bdev_name": "Malloc0" 00:05:06.465 }, 00:05:06.465 { 00:05:06.465 "nbd_device": "/dev/nbd1", 00:05:06.465 "bdev_name": "Malloc1" 00:05:06.465 } 00:05:06.465 ]' 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.465 { 00:05:06.465 "nbd_device": "/dev/nbd0", 00:05:06.465 "bdev_name": "Malloc0" 00:05:06.465 }, 00:05:06.465 { 00:05:06.465 "nbd_device": "/dev/nbd1", 00:05:06.465 "bdev_name": "Malloc1" 00:05:06.465 } 00:05:06.465 ]' 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.465 /dev/nbd1' 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.465 /dev/nbd1' 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.465 256+0 records in 00:05:06.465 256+0 records out 00:05:06.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112752 s, 93.0 MB/s 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.465 10:57:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.724 256+0 records in 00:05:06.724 256+0 records out 00:05:06.724 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0203334 s, 51.6 MB/s 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.724 256+0 records in 00:05:06.724 256+0 records out 00:05:06.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213828 s, 49.0 MB/s 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.724 10:57:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.983 10:57:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.242 10:57:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.242 10:57:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.502 10:57:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.760 [2024-05-15 10:57:04.778281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.760 [2024-05-15 10:57:04.844491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.760 [2024-05-15 10:57:04.844494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.760 [2024-05-15 10:57:04.885544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.760 [2024-05-15 10:57:04.885590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
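After the disks are detached, the trace runs the nbd_get_count check: it asks the target for its remaining NBD exports, filters the JSON with jq, and expects zero /dev/nbd entries before the instance is killed. Approximately, with the jq filter taken from the log and the variable names mine:

disks_json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || { echo "nbd devices still attached"; exit 1; }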
00:05:11.051 10:57:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1384417 /var/tmp/spdk-nbd.sock 00:05:11.051 10:57:07 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1384417 ']' 00:05:11.051 10:57:07 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.051 10:57:07 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:11.051 10:57:07 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.051 10:57:07 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:11.052 10:57:07 event.app_repeat -- event/event.sh@39 -- # killprocess 1384417 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 1384417 ']' 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 1384417 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1384417 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1384417' 00:05:11.052 killing process with pid 1384417 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@966 -- # kill 1384417 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@971 -- # wait 1384417 00:05:11.052 spdk_app_start is called in Round 0. 00:05:11.052 Shutdown signal received, stop current app iteration 00:05:11.052 Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 reinitialization... 00:05:11.052 spdk_app_start is called in Round 1. 00:05:11.052 Shutdown signal received, stop current app iteration 00:05:11.052 Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 reinitialization... 00:05:11.052 spdk_app_start is called in Round 2. 00:05:11.052 Shutdown signal received, stop current app iteration 00:05:11.052 Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 reinitialization... 00:05:11.052 spdk_app_start is called in Round 3. 
00:05:11.052 Shutdown signal received, stop current app iteration 00:05:11.052 10:57:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.052 10:57:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.052 00:05:11.052 real 0m16.318s 00:05:11.052 user 0m34.527s 00:05:11.052 sys 0m3.183s 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:11.052 10:57:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.052 ************************************ 00:05:11.052 END TEST app_repeat 00:05:11.052 ************************************ 00:05:11.052 10:57:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.052 10:57:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.052 10:57:08 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:11.052 10:57:08 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:11.052 10:57:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.052 ************************************ 00:05:11.052 START TEST cpu_locks 00:05:11.052 ************************************ 00:05:11.052 10:57:08 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.052 * Looking for test storage... 00:05:11.052 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:11.052 10:57:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.052 10:57:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.052 10:57:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.052 10:57:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.052 10:57:08 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:11.052 10:57:08 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:11.052 10:57:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.052 ************************************ 00:05:11.052 START TEST default_locks 00:05:11.052 ************************************ 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1388137 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1388137 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 1388137 ']' 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
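The default_locks test starting here launches spdk_tgt on core mask 0x1, waits for its UNIX socket, and then asserts via lslocks that the target holds its CPU-core file lock. Roughly what that locks_exist check boils down to; the spdk_cpu_lock name comes from the trace below, while the helper wrapper is an assumption:

check_core_lock() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # true if a core lock is held
}
# in this run: check_core_lock 1388137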
00:05:11.052 10:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:11.052 10:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.052 [2024-05-15 10:57:08.240532] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:11.052 [2024-05-15 10:57:08.240615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388137 ] 00:05:11.052 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.052 [2024-05-15 10:57:08.307682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.311 [2024-05-15 10:57:08.385828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.880 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:11.880 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:05:11.880 10:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1388137 00:05:11.880 10:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1388137 00:05:11.880 10:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.448 lslocks: write error 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1388137 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 1388137 ']' 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 1388137 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1388137 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:12.448 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1388137' 00:05:12.448 killing process with pid 1388137 00:05:12.708 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 1388137 00:05:12.708 10:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 1388137 00:05:12.967 10:57:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1388137 00:05:12.967 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:12.967 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1388137 00:05:12.967 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:12.967 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:12.967 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:12.967 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:12.967 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 1388137 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 1388137 ']' 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.968 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1388137) - No such process 00:05:12.968 ERROR: process (pid: 1388137) is no longer running 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.968 00:05:12.968 real 0m1.813s 00:05:12.968 user 0m1.906s 00:05:12.968 sys 0m0.632s 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:12.968 10:57:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.968 ************************************ 00:05:12.968 END TEST default_locks 00:05:12.968 ************************************ 00:05:12.968 10:57:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.968 10:57:10 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:12.968 10:57:10 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:12.968 10:57:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.968 ************************************ 00:05:12.968 START TEST default_locks_via_rpc 00:05:12.968 ************************************ 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1388435 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1388435 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.968 10:57:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1388435 ']' 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:12.968 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.968 [2024-05-15 10:57:10.142108] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:12.968 [2024-05-15 10:57:10.142169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388435 ] 00:05:12.968 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.968 [2024-05-15 10:57:10.212140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.227 [2024-05-15 10:57:10.286890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1388435 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1388435 00:05:13.795 10:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1388435 00:05:14.364 10:57:11 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 1388435 ']' 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 1388435 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1388435 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1388435' 00:05:14.364 killing process with pid 1388435 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 1388435 00:05:14.364 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 1388435 00:05:14.624 00:05:14.624 real 0m1.712s 00:05:14.624 user 0m1.792s 00:05:14.624 sys 0m0.601s 00:05:14.624 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:14.624 10:57:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.624 ************************************ 00:05:14.624 END TEST default_locks_via_rpc 00:05:14.624 ************************************ 00:05:14.624 10:57:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.624 10:57:11 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:14.624 10:57:11 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:14.624 10:57:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.884 ************************************ 00:05:14.884 START TEST non_locking_app_on_locked_coremask 00:05:14.884 ************************************ 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1388739 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1388739 /var/tmp/spdk.sock 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1388739 ']' 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
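Every one of these cases tears its target down through the same killprocess helper, whose steps are traced above at common/autotest_common.sh@947 through @971: confirm a pid was passed and is still alive, look up its command name, refuse to proceed for a sudo wrapper, then kill and reap it. The following is a condensed reconstruction based only on the commands visible in this trace; the real helper has additional branches (non-Linux hosts, sudo-wrapped pids) that are omitted here.

    # killprocess <pid> - condensed sketch of the teardown traced above.
    # Assumes <pid> is a child of the calling shell, as it is in these tests.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                            # the '[' -z ... ']' guard
        kill -0 "$pid" || return 1                           # target must still be running
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK target
        fi
        [ "$process_name" = sudo ] && return 1               # the real helper treats sudo pids specially
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                           # reap it so the next case starts clean
    }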
00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:14.884 10:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.884 [2024-05-15 10:57:11.939529] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:14.884 [2024-05-15 10:57:11.939610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388739 ] 00:05:14.884 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.884 [2024-05-15 10:57:12.009054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.884 [2024-05-15 10:57:12.087156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1388999 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1388999 /var/tmp/spdk2.sock 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1388999 ']' 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:15.824 10:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 [2024-05-15 10:57:12.779624] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:15.824 [2024-05-15 10:57:12.779710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388999 ] 00:05:15.824 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.824 [2024-05-15 10:57:12.870214] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
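The case starting here, non_locking_app_on_locked_coremask, hinges on the second target opting out of core-lock claiming: the first spdk_tgt takes core 0 normally (pid 1388739 above), while the second reuses the same -m 0x1 mask with --disable-cpumask-locks and its own RPC socket, which is why a "CPU core locks deactivated" notice appears instead of a lock-claim error. A rough reconstruction of the two launches follows; SPDK_BIN is shorthand introduced here for the full spdk_tgt path shown in the trace, and the pid variables mirror the ones the script itself uses.

    SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

    # First target claims core 0 and creates its /var/tmp/spdk_cpu_lock_* file.
    "$SPDK_BIN" -m 0x1 &
    spdk_tgt_pid=$!

    # Second target reuses core 0 but skips lock claiming and listens on a
    # separate socket, so both can run on the same core for this test.
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!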
00:05:15.824 [2024-05-15 10:57:12.870234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.824 [2024-05-15 10:57:13.018482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.395 10:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:16.395 10:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:16.395 10:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1388739 00:05:16.395 10:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1388739 00:05:16.395 10:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.775 lslocks: write error 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1388739 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1388739 ']' 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1388739 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1388739 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1388739' 00:05:17.775 killing process with pid 1388739 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1388739 00:05:17.775 10:57:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1388739 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1388999 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1388999 ']' 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1388999 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1388999 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1388999' 00:05:18.344 
killing process with pid 1388999 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1388999 00:05:18.344 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1388999 00:05:18.604 00:05:18.604 real 0m3.910s 00:05:18.604 user 0m4.156s 00:05:18.604 sys 0m1.233s 00:05:18.604 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:18.604 10:57:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.604 ************************************ 00:05:18.604 END TEST non_locking_app_on_locked_coremask 00:05:18.604 ************************************ 00:05:18.604 10:57:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:18.604 10:57:15 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:18.604 10:57:15 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.604 10:57:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.864 ************************************ 00:05:18.864 START TEST locking_app_on_unlocked_coremask 00:05:18.864 ************************************ 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1389571 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1389571 /var/tmp/spdk.sock 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1389571 ']' 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:18.864 10:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.864 [2024-05-15 10:57:15.940578] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:18.864 [2024-05-15 10:57:15.940642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389571 ] 00:05:18.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.864 [2024-05-15 10:57:16.008938] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
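The assertion each case keeps returning to, that a running target does (or, with locks disabled, does not) hold a per-core lock file, is the locks_exist check traced at event/cpu_locks.sh@22. A minimal sketch of it as reconstructed from the trace is below; note that the "lslocks: write error" lines scattered through this log are almost certainly just lslocks hitting a closed pipe once grep -q has found its match, not a test failure.

    # locks_exist <pid>: succeed if <pid> holds an SPDK per-core lock file.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }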
00:05:18.864 [2024-05-15 10:57:16.008965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.864 [2024-05-15 10:57:16.078652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1389617 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1389617 /var/tmp/spdk2.sock 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1389617 ']' 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:19.802 10:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.802 [2024-05-15 10:57:16.767591] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:19.802 [2024-05-15 10:57:16.767642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389617 ] 00:05:19.802 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.802 [2024-05-15 10:57:16.859497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.802 [2024-05-15 10:57:17.002840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.370 10:57:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:20.370 10:57:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:20.370 10:57:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1389617 00:05:20.370 10:57:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1389617 00:05:20.370 10:57:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.311 lslocks: write error 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1389571 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1389571 ']' 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 1389571 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1389571 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1389571' 00:05:21.311 killing process with pid 1389571 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 1389571 00:05:21.311 10:57:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 1389571 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1389617 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1389617 ']' 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 1389617 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1389617 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1389617' 00:05:21.879 killing process with pid 1389617 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 1389617 00:05:21.879 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 1389617 00:05:22.138 00:05:22.138 real 0m3.457s 00:05:22.138 user 0m3.660s 00:05:22.138 sys 0m1.130s 00:05:22.138 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:22.138 10:57:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.138 ************************************ 00:05:22.138 END TEST locking_app_on_unlocked_coremask 00:05:22.138 ************************************ 00:05:22.397 10:57:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:22.397 10:57:19 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:22.397 10:57:19 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.397 10:57:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.397 ************************************ 00:05:22.397 START TEST locking_app_on_locked_coremask 00:05:22.397 ************************************ 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1390153 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1390153 /var/tmp/spdk.sock 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1390153 ']' 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:22.397 10:57:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.397 [2024-05-15 10:57:19.465179] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:22.397 [2024-05-15 10:57:19.465234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390153 ] 00:05:22.397 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.397 [2024-05-15 10:57:19.533112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.397 [2024-05-15 10:57:19.611115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1390402 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1390402 /var/tmp/spdk2.sock 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1390402 /var/tmp/spdk2.sock 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1390402 /var/tmp/spdk2.sock 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1390402 ']' 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:23.335 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.335 [2024-05-15 10:57:20.305513] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:23.335 [2024-05-15 10:57:20.305577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390402 ] 00:05:23.335 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.335 [2024-05-15 10:57:20.402364] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1390153 has claimed it. 00:05:23.335 [2024-05-15 10:57:20.402404] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.903 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1390402) - No such process 00:05:23.903 ERROR: process (pid: 1390402) is no longer running 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1390153 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1390153 00:05:23.903 10:57:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.484 lslocks: write error 00:05:24.484 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1390153 00:05:24.484 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1390153 ']' 00:05:24.484 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1390153 00:05:24.484 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:24.484 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:24.484 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1390153 00:05:24.485 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:24.485 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:24.485 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1390153' 00:05:24.485 killing process with pid 1390153 00:05:24.485 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1390153 00:05:24.485 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1390153 00:05:24.804 00:05:24.804 real 0m2.368s 00:05:24.804 user 0m2.572s 00:05:24.804 sys 0m0.693s 00:05:24.804 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:05:24.804 10:57:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 ************************************ 00:05:24.804 END TEST locking_app_on_locked_coremask 00:05:24.804 ************************************ 00:05:24.804 10:57:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:24.804 10:57:21 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:24.804 10:57:21 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:24.804 10:57:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 ************************************ 00:05:24.804 START TEST locking_overlapped_coremask 00:05:24.804 ************************************ 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1390711 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1390711 /var/tmp/spdk.sock 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 1390711 ']' 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:24.804 10:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 [2024-05-15 10:57:21.917750] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:24.804 [2024-05-15 10:57:21.917818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390711 ] 00:05:24.804 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.804 [2024-05-15 10:57:21.987606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.064 [2024-05-15 10:57:22.072756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.064 [2024-05-15 10:57:22.072850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.064 [2024-05-15 10:57:22.072851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.635 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:25.635 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:25.635 10:57:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1390735 00:05:25.635 10:57:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1390735 /var/tmp/spdk2.sock 00:05:25.635 10:57:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:25.635 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:25.635 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1390735 /var/tmp/spdk2.sock 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1390735 /var/tmp/spdk2.sock 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 1390735 ']' 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:25.636 10:57:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.636 [2024-05-15 10:57:22.768583] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:25.636 [2024-05-15 10:57:22.768647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390735 ] 00:05:25.636 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.636 [2024-05-15 10:57:22.861928] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1390711 has claimed it. 00:05:25.636 [2024-05-15 10:57:22.861964] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.201 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1390735) - No such process 00:05:26.201 ERROR: process (pid: 1390735) is no longer running 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1390711 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 1390711 ']' 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 1390711 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:26.201 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1390711 00:05:26.460 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:26.460 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:26.460 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1390711' 00:05:26.460 killing process with pid 1390711 00:05:26.460 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 
1390711 00:05:26.460 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 1390711 00:05:26.719 00:05:26.719 real 0m1.889s 00:05:26.719 user 0m5.300s 00:05:26.719 sys 0m0.456s 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.719 ************************************ 00:05:26.719 END TEST locking_overlapped_coremask 00:05:26.719 ************************************ 00:05:26.719 10:57:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:26.719 10:57:23 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:26.719 10:57:23 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:26.719 10:57:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.719 ************************************ 00:05:26.719 START TEST locking_overlapped_coremask_via_rpc 00:05:26.719 ************************************ 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1391025 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1391025 /var/tmp/spdk.sock 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1391025 ']' 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:26.719 10:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.719 [2024-05-15 10:57:23.899646] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:26.719 [2024-05-15 10:57:23.899731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391025 ] 00:05:26.719 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.719 [2024-05-15 10:57:23.970649] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:26.719 [2024-05-15 10:57:23.970677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.978 [2024-05-15 10:57:24.041907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.978 [2024-05-15 10:57:24.042003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.978 [2024-05-15 10:57:24.042004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1391197 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1391197 /var/tmp/spdk2.sock 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1391197 ']' 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:27.545 10:57:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.545 [2024-05-15 10:57:24.740369] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:27.545 [2024-05-15 10:57:24.740558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391197 ] 00:05:27.545 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.804 [2024-05-15 10:57:24.836864] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:27.804 [2024-05-15 10:57:24.836894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.804 [2024-05-15 10:57:24.982992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.804 [2024-05-15 10:57:24.986429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.804 [2024-05-15 10:57:24.986430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.371 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.371 [2024-05-15 10:57:25.594443] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1391025 has claimed it. 
00:05:28.371 request: 00:05:28.371 { 00:05:28.371 "method": "framework_enable_cpumask_locks", 00:05:28.371 "req_id": 1 00:05:28.371 } 00:05:28.371 Got JSON-RPC error response 00:05:28.371 response: 00:05:28.372 { 00:05:28.372 "code": -32603, 00:05:28.372 "message": "Failed to claim CPU core: 2" 00:05:28.372 } 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1391025 /var/tmp/spdk.sock 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1391025 ']' 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:28.372 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1391197 /var/tmp/spdk2.sock 00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1391197 ']' 00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
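Note on the failure above: this is the intended negative case for the cpumask-locks RPC. The second target (pid 1391197) was launched with -m 0x1c and --disable-cpumask-locks while the first target still holds the lock for core 2, so framework_enable_cpumask_locks on /var/tmp/spdk2.sock returns -32603 ("Failed to claim CPU core: 2"). A minimal sketch of the same scenario, assuming a local SPDK checkout with spdk_tgt under build/bin and the stock scripts/rpc.py helper; the 0x7 mask for the first target is inferred from the reactors on cores 0-2 in this run, and the relative paths are illustrative:

    # first target claims cores 0-2 and creates /var/tmp/spdk_cpu_lock_000..002
    ./build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &
    # second target overlaps on core 2 (0x1c = cores 2,3,4); lock claiming is deferred
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    sleep 2   # give both targets time to bring up their RPC sockets
    # expected to fail with JSON-RPC error -32603, "Failed to claim CPU core: 2"
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks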
00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:28.630 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.889 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:28.889 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:28.889 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:28.889 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:28.889 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:28.889 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:28.889 00:05:28.889 real 0m2.088s 00:05:28.889 user 0m0.814s 00:05:28.889 sys 0m0.203s 00:05:28.889 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:28.889 10:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.889 ************************************ 00:05:28.889 END TEST locking_overlapped_coremask_via_rpc 00:05:28.889 ************************************ 00:05:28.889 10:57:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:28.889 10:57:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1391025 ]] 00:05:28.889 10:57:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1391025 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1391025 ']' 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1391025 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1391025 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1391025' 00:05:28.889 killing process with pid 1391025 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 1391025 00:05:28.889 10:57:26 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 1391025 00:05:29.148 10:57:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1391197 ]] 00:05:29.148 10:57:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1391197 00:05:29.148 10:57:26 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1391197 ']' 00:05:29.148 10:57:26 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1391197 00:05:29.148 10:57:26 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:29.148 10:57:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:05:29.148 10:57:26 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1391197 00:05:29.407 10:57:26 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:29.407 10:57:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:29.407 10:57:26 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1391197' 00:05:29.407 killing process with pid 1391197 00:05:29.407 10:57:26 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 1391197 00:05:29.407 10:57:26 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 1391197 00:05:29.667 10:57:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.667 10:57:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:29.667 10:57:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1391025 ]] 00:05:29.667 10:57:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1391025 00:05:29.667 10:57:26 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1391025 ']' 00:05:29.667 10:57:26 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1391025 00:05:29.667 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (1391025) - No such process 00:05:29.667 10:57:26 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 1391025 is not found' 00:05:29.667 Process with pid 1391025 is not found 00:05:29.667 10:57:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1391197 ]] 00:05:29.667 10:57:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1391197 00:05:29.667 10:57:26 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1391197 ']' 00:05:29.667 10:57:26 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1391197 00:05:29.667 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (1391197) - No such process 00:05:29.667 10:57:26 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 1391197 is not found' 00:05:29.667 Process with pid 1391197 is not found 00:05:29.667 10:57:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.667 00:05:29.667 real 0m18.683s 00:05:29.667 user 0m30.833s 00:05:29.667 sys 0m5.992s 00:05:29.667 10:57:26 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:29.667 10:57:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.667 ************************************ 00:05:29.667 END TEST cpu_locks 00:05:29.667 ************************************ 00:05:29.667 00:05:29.667 real 0m44.433s 00:05:29.667 user 1m22.543s 00:05:29.667 sys 0m10.333s 00:05:29.667 10:57:26 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:29.667 10:57:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.667 ************************************ 00:05:29.667 END TEST event 00:05:29.667 ************************************ 00:05:29.667 10:57:26 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:29.667 10:57:26 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:29.667 10:57:26 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:29.667 10:57:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.667 ************************************ 00:05:29.667 START TEST thread 00:05:29.667 ************************************ 00:05:29.667 10:57:26 thread -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:29.926 * Looking for test storage... 00:05:29.926 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:29.926 10:57:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:29.926 10:57:26 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:29.926 10:57:26 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:29.926 10:57:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.926 ************************************ 00:05:29.926 START TEST thread_poller_perf 00:05:29.926 ************************************ 00:05:29.926 10:57:27 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:29.926 [2024-05-15 10:57:27.037538] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:29.926 [2024-05-15 10:57:27.037635] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391661 ] 00:05:29.926 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.926 [2024-05-15 10:57:27.108960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.926 [2024-05-15 10:57:27.179982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.926 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:31.304 ====================================== 00:05:31.304 busy:2504554800 (cyc) 00:05:31.304 total_run_count: 864000 00:05:31.304 tsc_hz: 2500000000 (cyc) 00:05:31.304 ====================================== 00:05:31.304 poller_cost: 2898 (cyc), 1159 (nsec) 00:05:31.304 00:05:31.304 real 0m1.226s 00:05:31.304 user 0m1.137s 00:05:31.304 sys 0m0.084s 00:05:31.304 10:57:28 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:31.304 10:57:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.304 ************************************ 00:05:31.304 END TEST thread_poller_perf 00:05:31.304 ************************************ 00:05:31.304 10:57:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:31.304 10:57:28 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:31.304 10:57:28 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:31.304 10:57:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.304 ************************************ 00:05:31.304 START TEST thread_poller_perf 00:05:31.304 ************************************ 00:05:31.304 10:57:28 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:31.304 [2024-05-15 10:57:28.337478] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:31.304 [2024-05-15 10:57:28.337558] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391946 ] 00:05:31.304 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.304 [2024-05-15 10:57:28.408529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.304 [2024-05-15 10:57:28.478414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.304 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:32.684 ====================================== 00:05:32.684 busy:2501442768 (cyc) 00:05:32.684 total_run_count: 13901000 00:05:32.684 tsc_hz: 2500000000 (cyc) 00:05:32.684 ====================================== 00:05:32.684 poller_cost: 179 (cyc), 71 (nsec) 00:05:32.684 00:05:32.684 real 0m1.225s 00:05:32.684 user 0m1.129s 00:05:32.684 sys 0m0.092s 00:05:32.684 10:57:29 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:32.684 10:57:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.684 ************************************ 00:05:32.684 END TEST thread_poller_perf 00:05:32.684 ************************************ 00:05:32.684 10:57:29 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:32.684 10:57:29 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:32.684 10:57:29 thread -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:32.684 10:57:29 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:32.684 10:57:29 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.684 ************************************ 00:05:32.684 START TEST thread_spdk_lock 00:05:32.684 ************************************ 00:05:32.684 10:57:29 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:32.684 [2024-05-15 10:57:29.651181] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
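The two poller_perf summaries above are internally consistent: poller_cost is busy cycles divided by total_run_count. For the 1 us period run, 2504554800 cyc / 864000 runs ≈ 2898 cyc per poll, and at tsc_hz = 2.5 GHz that is 2898 / 2.5 ≈ 1159 ns; for the 0 us period run, 2501442768 / 13901000 ≈ 179 cyc ≈ 71 ns. The gap presumably reflects the extra bookkeeping of a timed poller with a 1 us period versus an untimed poller run back to back.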
00:05:32.684 [2024-05-15 10:57:29.651264] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392234 ] 00:05:32.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.684 [2024-05-15 10:57:29.722677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.684 [2024-05-15 10:57:29.795037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.684 [2024-05-15 10:57:29.795040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.252 [2024-05-15 10:57:30.289982] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:33.252 [2024-05-15 10:57:30.290022] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:33.252 [2024-05-15 10:57:30.290032] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x14b75c0 00:05:33.252 [2024-05-15 10:57:30.290977] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:33.252 [2024-05-15 10:57:30.291082] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:33.252 [2024-05-15 10:57:30.291101] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:33.252 Starting test contend 00:05:33.252 Worker Delay Wait us Hold us Total us 00:05:33.252 0 3 164151 189131 353282 00:05:33.252 1 5 85959 289509 375469 00:05:33.252 PASS test contend 00:05:33.252 Starting test hold_by_poller 00:05:33.252 PASS test hold_by_poller 00:05:33.252 Starting test hold_by_message 00:05:33.252 PASS test hold_by_message 00:05:33.252 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:33.252 100014 assertions passed 00:05:33.252 0 assertions failed 00:05:33.252 00:05:33.252 real 0m0.722s 00:05:33.252 user 0m1.129s 00:05:33.252 sys 0m0.086s 00:05:33.252 10:57:30 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:33.252 10:57:30 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:33.252 ************************************ 00:05:33.252 END TEST thread_spdk_lock 00:05:33.252 ************************************ 00:05:33.252 00:05:33.252 real 0m3.527s 00:05:33.252 user 0m3.503s 00:05:33.252 sys 0m0.525s 00:05:33.252 10:57:30 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:33.252 10:57:30 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.252 ************************************ 00:05:33.252 END TEST thread 00:05:33.252 ************************************ 00:05:33.252 10:57:30 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:33.252 10:57:30 -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:33.252 10:57:30 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:33.252 10:57:30 -- common/autotest_common.sh@10 -- # set +x 00:05:33.252 ************************************ 00:05:33.252 START TEST accel 00:05:33.252 ************************************ 00:05:33.252 10:57:30 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:33.511 * Looking for test storage... 00:05:33.511 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:33.511 10:57:30 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:33.511 10:57:30 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:33.511 10:57:30 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.511 10:57:30 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1392328 00:05:33.511 10:57:30 accel -- accel/accel.sh@63 -- # waitforlisten 1392328 00:05:33.511 10:57:30 accel -- common/autotest_common.sh@828 -- # '[' -z 1392328 ']' 00:05:33.511 10:57:30 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.511 10:57:30 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:33.511 10:57:30 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:33.511 10:57:30 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:33.511 10:57:30 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.511 10:57:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.511 10:57:30 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:33.512 10:57:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.512 10:57:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.512 10:57:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.512 10:57:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.512 10:57:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.512 10:57:30 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:33.512 10:57:30 accel -- accel/accel.sh@41 -- # jq -r . 00:05:33.512 [2024-05-15 10:57:30.630050] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:33.512 [2024-05-15 10:57:30.630110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392328 ] 00:05:33.512 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.512 [2024-05-15 10:57:30.698743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.770 [2024-05-15 10:57:30.780435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.339 10:57:31 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:34.339 10:57:31 accel -- common/autotest_common.sh@861 -- # return 0 00:05:34.339 10:57:31 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:34.339 10:57:31 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:34.339 10:57:31 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:34.339 10:57:31 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:34.339 10:57:31 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:34.339 10:57:31 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:34.339 10:57:31 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.339 10:57:31 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:34.339 10:57:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.339 10:57:31 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.339 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.339 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.339 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.339 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.339 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.339 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.339 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.339 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.339 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.339 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.339 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.339 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.339 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.339 
10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.339 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.339 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.339 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.340 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.340 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.340 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.340 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.340 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.340 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.340 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.340 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.340 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.340 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.340 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.340 10:57:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # IFS== 00:05:34.340 10:57:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:34.340 10:57:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:34.340 10:57:31 accel -- accel/accel.sh@75 -- # killprocess 1392328 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@947 -- # '[' -z 1392328 ']' 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@951 -- # kill -0 1392328 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@952 -- # uname 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1392328 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1392328' 00:05:34.340 killing process with pid 1392328 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@966 -- # kill 1392328 00:05:34.340 10:57:31 accel -- common/autotest_common.sh@971 -- # wait 1392328 00:05:34.599 10:57:31 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:34.599 10:57:31 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:34.599 10:57:31 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:05:34.599 10:57:31 accel -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:05:34.599 10:57:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.859 10:57:31 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:34.859 10:57:31 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:34.859 10:57:31 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:34.859 10:57:31 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:34.859 10:57:31 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:34.859 10:57:31 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:34.859 10:57:31 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:34.859 10:57:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.859 ************************************ 00:05:34.859 START TEST accel_missing_filename 00:05:34.859 ************************************ 00:05:34.859 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:05:34.859 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:05:34.859 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:34.859 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:34.859 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.859 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:34.859 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.859 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:34.859 10:57:32 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 
00:05:34.859 [2024-05-15 10:57:32.035653] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:34.859 [2024-05-15 10:57:32.035750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392611 ] 00:05:34.859 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.859 [2024-05-15 10:57:32.108425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.118 [2024-05-15 10:57:32.185892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.118 [2024-05-15 10:57:32.225759] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.118 [2024-05-15 10:57:32.286049] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:35.118 A filename is required. 00:05:35.118 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:05:35.118 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:35.118 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:05:35.118 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:05:35.118 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:05:35.118 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:35.118 00:05:35.118 real 0m0.343s 00:05:35.118 user 0m0.237s 00:05:35.118 sys 0m0.142s 00:05:35.119 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.119 10:57:32 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:35.119 ************************************ 00:05:35.119 END TEST accel_missing_filename 00:05:35.119 ************************************ 00:05:35.378 10:57:32 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:35.378 10:57:32 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:35.378 10:57:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.378 10:57:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.378 ************************************ 00:05:35.378 START TEST accel_compress_verify 00:05:35.378 ************************************ 00:05:35.378 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:35.378 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:05:35.378 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:35.378 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:35.378 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.378 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:35.378 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.378 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@652 -- 
# accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:35.378 10:57:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:35.378 10:57:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:35.378 10:57:32 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.378 10:57:32 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.378 10:57:32 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.379 10:57:32 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.379 10:57:32 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.379 10:57:32 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:35.379 10:57:32 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:35.379 [2024-05-15 10:57:32.460889] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:35.379 [2024-05-15 10:57:32.460968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392801 ] 00:05:35.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.379 [2024-05-15 10:57:32.533819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.379 [2024-05-15 10:57:32.605930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.638 [2024-05-15 10:57:32.646035] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.638 [2024-05-15 10:57:32.706170] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:35.638 00:05:35.638 Compression does not support the verify option, aborting. 
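The two aborts in the accel compress tests above are argument checks inside accel_perf itself: with -w compress and no -l input file it stops with "A filename is required.", and adding -y is rejected because a compression result cannot be byte-compared against the source ("Compression does not support the verify option, aborting."). A sketch of the accepted form, assuming the bundled test/accel/bib input used by this suite and -o 0 per the help text ("for compress/decompress, 0 means the input file size"); the relative paths are illustrative:

    # compress the sample input without verification
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -o 0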
00:05:35.638 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:05:35.638 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:35.638 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:05:35.638 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:05:35.638 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:05:35.638 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:35.638 00:05:35.638 real 0m0.337s 00:05:35.638 user 0m0.239s 00:05:35.638 sys 0m0.138s 00:05:35.638 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.638 10:57:32 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:35.638 ************************************ 00:05:35.638 END TEST accel_compress_verify 00:05:35.638 ************************************ 00:05:35.638 10:57:32 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:35.638 10:57:32 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:35.638 10:57:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.638 10:57:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.638 ************************************ 00:05:35.638 START TEST accel_wrong_workload 00:05:35.638 ************************************ 00:05:35.638 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:05:35.638 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:35.639 10:57:32 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:35.639 Unsupported workload type: foobar 00:05:35.639 [2024-05-15 10:57:32.883112] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:35.639 accel_perf options: 00:05:35.639 [-h help message] 00:05:35.639 [-q queue depth per core] 00:05:35.639 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:35.639 [-T number of threads per core 00:05:35.639 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:35.639 [-t time in seconds] 00:05:35.639 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:35.639 [ dif_verify, , dif_generate, dif_generate_copy 00:05:35.639 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:35.639 [-l for compress/decompress workloads, name of uncompressed input file 00:05:35.639 [-S for crc32c workload, use this seed value (default 0) 00:05:35.639 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:35.639 [-f for fill workload, use this BYTE value (default 255) 00:05:35.639 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:35.639 [-y verify result if this switch is on] 00:05:35.639 [-a tasks to allocate per core (default: same value as -q)] 00:05:35.639 Can be used to spread operations across a wider range of memory. 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:35.639 00:05:35.639 real 0m0.027s 00:05:35.639 user 0m0.010s 00:05:35.639 sys 0m0.017s 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.639 10:57:32 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:35.639 ************************************ 00:05:35.639 END TEST accel_wrong_workload 00:05:35.639 ************************************ 00:05:35.639 Error: writing output failed: Broken pipe 00:05:35.898 10:57:32 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:35.898 10:57:32 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:35.898 10:57:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.898 10:57:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.898 ************************************ 00:05:35.898 START TEST accel_negative_buffers 00:05:35.898 ************************************ 00:05:35.898 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:35.898 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:05:35.898 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:35.898 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.899 10:57:32 accel.accel_negative_buffers -- 
common/autotest_common.sh@641 -- # type -t accel_perf 00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:35.899 10:57:32 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:35.899 -x option must be non-negative. 00:05:35.899 [2024-05-15 10:57:32.993255] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:35.899 accel_perf options: 00:05:35.899 [-h help message] 00:05:35.899 [-q queue depth per core] 00:05:35.899 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:35.899 [-T number of threads per core 00:05:35.899 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:35.899 [-t time in seconds] 00:05:35.899 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:35.899 [ dif_verify, , dif_generate, dif_generate_copy 00:05:35.899 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:35.899 [-l for compress/decompress workloads, name of uncompressed input file 00:05:35.899 [-S for crc32c workload, use this seed value (default 0) 00:05:35.899 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:35.899 [-f for fill workload, use this BYTE value (default 255) 00:05:35.899 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:35.899 [-y verify result if this switch is on] 00:05:35.899 [-a tasks to allocate per core (default: same value as -q)] 00:05:35.899 Can be used to spread operations across a wider range of memory. 
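Both dumps of the option help above come from accel_perf rejecting its arguments before submitting any work: -w foobar is not a known workload, and -x -1 fails the "must be non-negative" check on the xor source-buffer count. For contrast, the positive crc32c case exercised next boils down to a plain software-path run; a minimal sketch, assuming accel_perf from a local SPDK build (the relative path is illustrative):

    # 4 KiB transfers (the default per the help text), crc32c seed 32, verify enabled
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y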
00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:35.899 00:05:35.899 real 0m0.029s 00:05:35.899 user 0m0.016s 00:05:35.899 sys 0m0.013s 00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.899 10:57:32 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:35.899 ************************************ 00:05:35.899 END TEST accel_negative_buffers 00:05:35.899 ************************************ 00:05:35.899 Error: writing output failed: Broken pipe 00:05:35.899 10:57:33 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:35.899 10:57:33 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:35.899 10:57:33 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.899 10:57:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.899 ************************************ 00:05:35.899 START TEST accel_crc32c 00:05:35.899 ************************************ 00:05:35.899 10:57:33 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:35.899 10:57:33 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:35.899 [2024-05-15 10:57:33.114092] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:35.899 [2024-05-15 10:57:33.114178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392947 ] 00:05:35.899 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.158 [2024-05-15 10:57:33.187112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.158 [2024-05-15 10:57:33.265680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:36.158 10:57:33 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.158 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.159 10:57:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:37.536 10:57:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.536 00:05:37.536 real 0m1.349s 00:05:37.536 user 0m1.220s 00:05:37.536 sys 0m0.142s 00:05:37.536 10:57:34 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:37.536 10:57:34 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:37.536 ************************************ 00:05:37.536 END TEST accel_crc32c 00:05:37.536 ************************************ 00:05:37.536 10:57:34 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:37.536 10:57:34 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:37.536 10:57:34 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:37.536 10:57:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.536 ************************************ 00:05:37.536 START TEST accel_crc32c_C2 00:05:37.536 ************************************ 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:37.536 [2024-05-15 10:57:34.540707] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
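[editor's note] The accel_crc32c run above finished on the software module (real 0m1.349s) and the harness is now starting accel_crc32c_C2. For readers unfamiliar with the opcode name, "crc32c" is the Castagnoli CRC-32 checksum computed over the 4096-byte source configured in the trace. The following is a minimal, illustrative bitwise implementation in plain C, not SPDK's actual code path; the test pattern and the standalone main() are assumptions made only for the sketch.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c_sw(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Well-known CRC-32C check value for the ASCII string "123456789". */
        assert(crc32c_sw((const uint8_t *)"123456789", 9) == 0xE3069283u);

        uint8_t src[4096];               /* '4096 bytes', as in the trace above */
        memset(src, 0xA5, sizeof(src));  /* arbitrary test pattern (assumption) */
        printf("crc32c = 0x%08x\n", crc32c_sw(src, sizeof(src)));
        return 0;
    }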
00:05:37.536 [2024-05-15 10:57:34.540786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393230 ] 00:05:37.536 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.536 [2024-05-15 10:57:34.612798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.536 [2024-05-15 10:57:34.682887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.536 10:57:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.915 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.916 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:38.916 10:57:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.916 00:05:38.916 real 0m1.340s 00:05:38.916 user 0m1.212s 00:05:38.916 sys 0m0.141s 00:05:38.916 10:57:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:38.916 10:57:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:38.916 ************************************ 00:05:38.916 END TEST accel_crc32c_C2 00:05:38.916 ************************************ 00:05:38.916 10:57:35 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:38.916 10:57:35 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:38.916 10:57:35 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:38.916 10:57:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.916 ************************************ 00:05:38.916 START TEST accel_copy 00:05:38.916 ************************************ 00:05:38.916 10:57:35 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.916 10:57:35 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:38.916 10:57:35 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:05:38.916 [2024-05-15 10:57:35.949134] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:38.916 [2024-05-15 10:57:35.949213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393517 ] 00:05:38.916 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.916 [2024-05-15 10:57:36.019684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.916 [2024-05-15 10:57:36.090034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.916 10:57:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
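[editor's note] The accel_copy iteration being configured above uses the same 4096-byte buffer size and the software module. On that path the 'copy' opcode amounts to moving one buffer into another as fast as possible for the configured duration ('-t 1'). The sketch below is an illustrative stand-in timed over roughly one second, not the SPDK implementation; the fill pattern is an assumption and the other command-line knobs ('-y', the 32 values in the trace) are not modelled.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define BUF_SZ 4096   /* '4096 bytes' from the trace */

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        uint8_t *src = malloc(BUF_SZ), *dst = malloc(BUF_SZ);
        if (!src || !dst)
            return 1;
        memset(src, 0x5A, BUF_SZ);            /* arbitrary pattern (assumption) */

        uint64_t ops = 0;
        double start = now_sec();
        while (now_sec() - start < 1.0) {     /* run for ~1 second, like '-t 1' */
            memcpy(dst, src, BUF_SZ);         /* the 'copy' opcode, software path */
            ops++;
        }
        if (memcmp(src, dst, BUF_SZ) != 0)    /* simple sanity check on the result */
            return 1;
        printf("%llu copies of %d bytes in ~1s\n", (unsigned long long)ops, BUF_SZ);
        free(src);
        free(dst);
        return 0;
    }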
00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:40.294 10:57:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.294 00:05:40.294 real 0m1.335s 00:05:40.294 user 0m1.209s 00:05:40.294 sys 0m0.139s 00:05:40.294 10:57:37 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:40.294 10:57:37 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:40.294 ************************************ 00:05:40.294 END TEST accel_copy 00:05:40.294 ************************************ 00:05:40.294 10:57:37 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:40.294 10:57:37 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:05:40.294 10:57:37 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:40.294 10:57:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.294 ************************************ 00:05:40.294 START TEST accel_fill 00:05:40.294 ************************************ 00:05:40.294 10:57:37 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:40.294 [2024-05-15 10:57:37.348885] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
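[editor's note] accel_copy completed above (real 0m1.335s) and accel_fill is launched with '-w fill -f 128 -q 64 -a 64'; the 0x80 echoed in the trace lines up with the '-f 128' argument. A plain-C approximation of the software fill path is shown below, with the queue-depth and allocation knobs deliberately left out; this is only a sketch of the operation the test name refers to, not the SPDK module.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUF_SZ    4096  /* '4096 bytes' from the trace */
    #define FILL_BYTE 0x80  /* matches the 0x80 echoed after '-f 128' */

    int main(void)
    {
        uint8_t *dst = malloc(BUF_SZ);
        if (!dst)
            return 1;

        memset(dst, FILL_BYTE, BUF_SZ);      /* the 'fill' opcode in software */

        for (size_t i = 0; i < BUF_SZ; i++)  /* verification pass */
            if (dst[i] != FILL_BYTE) {
                puts("fill MISMATCH");
                free(dst);
                return 1;
            }
        puts("fill verified");
        free(dst);
        return 0;
    }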
00:05:40.294 [2024-05-15 10:57:37.348968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393797 ] 00:05:40.294 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.294 [2024-05-15 10:57:37.418192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.294 [2024-05-15 10:57:37.488895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:40.294 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.295 10:57:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:41.677 10:57:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.677 00:05:41.677 real 0m1.339s 00:05:41.677 user 0m1.223s 00:05:41.677 sys 0m0.130s 00:05:41.677 10:57:38 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.677 10:57:38 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:41.677 ************************************ 00:05:41.677 END TEST accel_fill 00:05:41.677 ************************************ 00:05:41.677 10:57:38 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:41.677 10:57:38 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:41.677 10:57:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.677 10:57:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.677 ************************************ 00:05:41.677 START TEST accel_copy_crc32c 00:05:41.677 ************************************ 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:41.677 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:41.677 [2024-05-15 10:57:38.760906] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
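[editor's note] With accel_fill finished (real 0m1.339s), the harness moves on to accel_copy_crc32c, which fuses the two earlier primitives: the source is copied and a CRC-32C is produced over the data in one operation. Below is a self-contained sketch of that fused behaviour, reusing the same bitwise CRC helper as in the earlier crc32c example; it illustrates the idea only and is not SPDK's code.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli), same helper as in the crc32c sketch above. */
    static uint32_t crc32c_sw(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* copy_crc32c: copy src into dst and return the CRC-32C of the copied data. */
    static uint32_t copy_crc32c(uint8_t *dst, const uint8_t *src, size_t len)
    {
        memcpy(dst, src, len);
        return crc32c_sw(dst, len);
    }

    int main(void)
    {
        uint8_t src[4096], dst[4096];    /* '4096 bytes' from the trace */
        memset(src, 0x3C, sizeof(src));  /* arbitrary pattern (assumption) */
        uint32_t crc = copy_crc32c(dst, src, sizeof(src));
        printf("copied %zu bytes, crc32c = 0x%08x\n", sizeof(dst), crc);
        return 0;
    }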
00:05:41.677 [2024-05-15 10:57:38.760984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394083 ] 00:05:41.677 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.677 [2024-05-15 10:57:38.832190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.677 [2024-05-15 10:57:38.903647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.936 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.937 10:57:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.874 10:57:40 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.874 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.875 00:05:42.875 real 0m1.346s 00:05:42.875 user 0m1.228s 00:05:42.875 sys 0m0.132s 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:42.875 10:57:40 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:42.875 ************************************ 00:05:42.875 END TEST accel_copy_crc32c 00:05:42.875 ************************************ 00:05:42.875 10:57:40 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.875 10:57:40 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:42.875 10:57:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:42.875 10:57:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.135 ************************************ 00:05:43.135 START TEST accel_copy_crc32c_C2 00:05:43.135 ************************************ 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:43.135 [2024-05-15 10:57:40.188659] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:43.135 [2024-05-15 10:57:40.188738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394325 ] 00:05:43.135 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.135 [2024-05-15 10:57:40.259309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.135 [2024-05-15 10:57:40.333343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.135 10:57:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.515 00:05:44.515 real 0m1.342s 00:05:44.515 user 0m1.222s 00:05:44.515 sys 0m0.133s 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:44.515 10:57:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:44.515 ************************************ 00:05:44.515 END TEST 
accel_copy_crc32c_C2 00:05:44.515 ************************************ 00:05:44.515 10:57:41 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:44.515 10:57:41 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:44.515 10:57:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:44.515 10:57:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.515 ************************************ 00:05:44.515 START TEST accel_dualcast 00:05:44.515 ************************************ 00:05:44.515 10:57:41 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:44.515 10:57:41 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:44.515 [2024-05-15 10:57:41.601122] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
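[editor's note] Both copy_crc32c variants have now passed and accel_dualcast is starting above. "Dualcast" is read here as broadcasting one source buffer into two destinations in a single operation; that reading is an inference from the opcode name and general SPDK accel usage rather than something stated in this log, so treat the sketch below as an assumption-laden illustration only.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BUF_SZ 4096  /* '4096 bytes' from the trace */

    /* dualcast (as understood here): one source copied to two destinations. */
    static void dualcast(uint8_t *dst1, uint8_t *dst2, const uint8_t *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

    int main(void)
    {
        static uint8_t src[BUF_SZ], dst1[BUF_SZ], dst2[BUF_SZ];
        memset(src, 0x7E, sizeof(src));  /* arbitrary pattern (assumption) */
        dualcast(dst1, dst2, src, sizeof(src));
        int ok = !memcmp(dst1, src, BUF_SZ) && !memcmp(dst2, src, BUF_SZ);
        printf("dualcast %s\n", ok ? "verified" : "MISMATCH");
        return ok ? 0 : 1;
    }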
00:05:44.515 [2024-05-15 10:57:41.601204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394538 ] 00:05:44.515 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.515 [2024-05-15 10:57:41.673565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.515 [2024-05-15 10:57:41.745706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 
10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.774 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.775 10:57:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.713 10:57:42 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:45.713 10:57:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.713 00:05:45.713 real 0m1.343s 00:05:45.713 user 0m1.217s 00:05:45.713 sys 0m0.139s 00:05:45.713 10:57:42 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:45.713 10:57:42 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:45.713 ************************************ 00:05:45.713 END TEST accel_dualcast 00:05:45.713 ************************************ 00:05:45.713 10:57:42 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:45.713 10:57:42 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:45.713 10:57:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:45.713 10:57:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.973 ************************************ 00:05:45.973 START TEST accel_compare 00:05:45.973 ************************************ 00:05:45.973 10:57:43 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:45.973 [2024-05-15 10:57:43.028189] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
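The accel_perf invocation for the dualcast case appears verbatim in the trace (build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y). A minimal sketch for reproducing that run outside the harness, assuming the SPDK tree from this workspace is already built and skipping the fd-based JSON config, would be roughly:

    # Hypothetical standalone reproduction of the dualcast run; adjust SPDK_DIR
    # to a local checkout. The -c /dev/fd/62 option is omitted because it is
    # only meaningful when the harness feeds a JSON config over that fd.
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dualcast -y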
00:05:45.973 [2024-05-15 10:57:43.028270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394767 ] 00:05:45.973 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.973 [2024-05-15 10:57:43.100612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.973 [2024-05-15 10:57:43.172156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.973 10:57:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.353 10:57:44 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:47.353 10:57:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.353 00:05:47.353 real 0m1.340s 00:05:47.353 user 0m1.214s 00:05:47.353 sys 0m0.139s 00:05:47.353 10:57:44 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:47.353 10:57:44 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:47.353 ************************************ 00:05:47.353 END TEST accel_compare 00:05:47.353 ************************************ 00:05:47.353 10:57:44 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:47.353 10:57:44 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:47.353 10:57:44 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:47.353 10:57:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.353 ************************************ 00:05:47.353 START TEST accel_xor 00:05:47.353 ************************************ 00:05:47.353 10:57:44 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:47.353 10:57:44 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:47.353 [2024-05-15 10:57:44.461070] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
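In this run every guard in build_accel_config evaluates false ([[ 0 -gt 0 ]] and [[ -n '' ]]), so no hardware accel module is configured and the operation falls back to the software module, which is why the closing checks compare against software. The pass condition at accel.sh@27 boils down to something like the following sketch (variable names taken from the trace, before expansion):

    # Sketch of the end-of-test assertions: an opcode and a module must have
    # been parsed from accel_perf output, and the module must be the software
    # implementation expected for this (empty) configuration.
    [[ -n $accel_module ]] || exit 1
    [[ -n $accel_opc ]] || exit 1
    [[ $accel_module == "software" ]] || exit 1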
00:05:47.353 [2024-05-15 10:57:44.461151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394993 ] 00:05:47.353 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.353 [2024-05-15 10:57:44.532750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.353 [2024-05-15 10:57:44.609060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.613 10:57:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.551 
10:57:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:48.551 10:57:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.551 00:05:48.551 real 0m1.345s 00:05:48.551 user 0m1.224s 00:05:48.551 sys 0m0.134s 00:05:48.551 10:57:45 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:48.551 10:57:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:48.551 ************************************ 00:05:48.551 END TEST accel_xor 00:05:48.551 ************************************ 00:05:48.811 10:57:45 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:48.811 10:57:45 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:48.811 10:57:45 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:48.811 10:57:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.811 ************************************ 00:05:48.811 START TEST accel_xor 00:05:48.811 ************************************ 00:05:48.811 10:57:45 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:48.811 10:57:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:48.811 [2024-05-15 10:57:45.898261] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
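The repeated IFS=: / read -r var val / case "$var" fragments above form the loop that consumes accel_perf output line by line, splitting each line on the first colon and capturing the fields the test cares about (the opcode and the module that executed it). A hedged sketch of that loop, with the matched key names as assumptions since the exact accel_perf output format is not shown in this log, is:

    # Parse "key: value" lines from accel_perf; the key spellings here are
    # illustrative, only the IFS=:/read/case structure is taken from the trace.
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    while IFS=: read -r var val; do
        case "$var" in
            *module*)   accel_module=${val//[[:space:]]/} ;;  # e.g. software
            *workload*) accel_opc=${val//[[:space:]]/} ;;     # e.g. xor
        esac
    done < <("$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y)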
00:05:48.811 [2024-05-15 10:57:45.898348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395261 ] 00:05:48.811 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.811 [2024-05-15 10:57:45.970651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.811 [2024-05-15 10:57:46.043099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.070 10:57:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.007 10:57:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.008 
10:57:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:50.008 10:57:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.008 00:05:50.008 real 0m1.341s 00:05:50.008 user 0m1.221s 00:05:50.008 sys 0m0.133s 00:05:50.008 10:57:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:50.008 10:57:47 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:50.008 ************************************ 00:05:50.008 END TEST accel_xor 00:05:50.008 ************************************ 00:05:50.008 10:57:47 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:50.008 10:57:47 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:50.008 10:57:47 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:50.008 10:57:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.268 ************************************ 00:05:50.268 START TEST accel_dif_verify 00:05:50.268 ************************************ 00:05:50.268 10:57:47 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:50.268 [2024-05-15 10:57:47.317921] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
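The second accel_xor case repeats the workload with -x 3, raising the number of xor source buffers from the default seen in the first run (val=2 in the trace) to three (val=3). Side by side, the two invocations from the log, run from a built SPDK checkout, are:

    # First run: default two xor sources (val=2 in the trace above)
    ./build/examples/accel_perf -t 1 -w xor -y
    # Second run: three xor sources (val=3 in the trace above)
    ./build/examples/accel_perf -t 1 -w xor -y -x 3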
00:05:50.268 [2024-05-15 10:57:47.317999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395550 ] 00:05:50.268 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.268 [2024-05-15 10:57:47.386887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.268 [2024-05-15 10:57:47.457490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 
10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.268 10:57:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.713 
10:57:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:51.713 10:57:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.713 00:05:51.713 real 0m1.334s 00:05:51.713 user 0m1.226s 00:05:51.713 sys 0m0.122s 00:05:51.713 10:57:48 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:51.713 10:57:48 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:51.713 ************************************ 00:05:51.713 END TEST accel_dif_verify 00:05:51.713 ************************************ 00:05:51.713 10:57:48 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:51.713 10:57:48 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:51.713 10:57:48 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.713 10:57:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.713 ************************************ 00:05:51.713 START TEST accel_dif_generate 00:05:51.713 ************************************ 00:05:51.713 10:57:48 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
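dif_verify follows the same shape: the trace shows the perf command (accel_perf -c /dev/fd/62 -t 1 -w dif_verify) plus what appear to be the buffer sizes the script feeds it (the val='4096 bytes', '512 bytes' and '8 bytes' entries, presumably data buffer, block and DIF metadata sizes). Reproducing just the workload, without the harness config, would look roughly like:

    # Hypothetical direct run of the dif_verify workload from a built SPDK tree;
    # the 4096/512/8 byte geometry above comes from the test script's defaults,
    # not from extra command-line flags shown in this log.
    ./build/examples/accel_perf -t 1 -w dif_verify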
00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:51.713 [2024-05-15 10:57:48.726416] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:51.713 [2024-05-15 10:57:48.726477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395832 ] 00:05:51.713 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.713 [2024-05-15 10:57:48.794437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.713 [2024-05-15 10:57:48.865368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.713 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.714 10:57:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:53.096 10:57:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.096 00:05:53.096 real 0m1.334s 00:05:53.096 user 0m1.214s 00:05:53.096 sys 
0m0.132s 00:05:53.096 10:57:50 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.096 10:57:50 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:53.096 ************************************ 00:05:53.096 END TEST accel_dif_generate 00:05:53.096 ************************************ 00:05:53.096 10:57:50 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:53.096 10:57:50 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:53.096 10:57:50 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.096 10:57:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.096 ************************************ 00:05:53.096 START TEST accel_dif_generate_copy 00:05:53.096 ************************************ 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:53.096 [2024-05-15 10:57:50.159010] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
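A side note on the -c /dev/fd/62 argument that every accel_perf invocation in this log carries: the accel_json_cfg=(), IFS=, and jq -r . steps traced just above assemble a JSON accel configuration in memory and hand it to the binary over a process-substitution descriptor rather than a file on disk. A hedged sketch of that pattern (illustrative shell, not the verbatim build_accel_config helper; the config entry shown is a placeholder, since the checks above leave the array empty in these runs):

    # Join an array of JSON fragments with commas, validate with jq, and pass the result
    # to accel_perf on a /dev/fd descriptor via process substitution.
    accel_json_cfg=('{"method": "example_accel_module_opts"}')   # placeholder entry only
    IFS=,
    ./build/examples/accel_perf -t 1 -w dif_generate_copy \
        -c <(jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}")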
00:05:53.096 [2024-05-15 10:57:50.159086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396125 ] 00:05:53.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.096 [2024-05-15 10:57:50.230155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.096 [2024-05-15 10:57:50.303288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.096 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.097 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.097 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.097 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.097 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.097 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.097 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.097 10:57:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.477 00:05:54.477 real 0m1.341s 00:05:54.477 user 0m1.229s 00:05:54.477 sys 0m0.125s 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:54.477 10:57:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:54.477 ************************************ 00:05:54.477 END TEST accel_dif_generate_copy 00:05:54.477 ************************************ 00:05:54.477 10:57:51 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:54.477 10:57:51 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:54.477 10:57:51 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:54.477 10:57:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:54.477 10:57:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.477 ************************************ 00:05:54.477 START TEST accel_comp 00:05:54.477 ************************************ 00:05:54.477 10:57:51 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:54.477 10:57:51 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:54.477 10:57:51 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:54.477 10:57:51 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:54.477 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.477 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.477 10:57:51 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:54.477 10:57:51 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:54.477 10:57:51 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.478 10:57:51 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.478 10:57:51 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.478 10:57:51 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.478 10:57:51 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.478 10:57:51 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:54.478 10:57:51 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:54.478 [2024-05-15 10:57:51.567974] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:54.478 [2024-05-15 10:57:51.568024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396404 ] 00:05:54.478 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.478 [2024-05-15 10:57:51.629847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.478 [2024-05-15 10:57:51.700810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 
10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.737 10:57:51 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.737 10:57:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:55.675 10:57:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.675 00:05:55.675 real 0m1.321s 00:05:55.675 user 0m1.218s 00:05:55.675 sys 0m0.118s 00:05:55.675 10:57:52 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:55.675 10:57:52 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:55.675 ************************************ 00:05:55.675 END TEST accel_comp 00:05:55.675 ************************************ 00:05:55.675 10:57:52 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:55.675 10:57:52 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:55.675 10:57:52 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:55.675 10:57:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.935 ************************************ 00:05:55.935 START TEST accel_decomp 00:05:55.935 ************************************ 00:05:55.935 10:57:52 
accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:55.935 10:57:52 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:55.935 [2024-05-15 10:57:52.966939] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:55.935 [2024-05-15 10:57:52.967018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396685 ] 00:05:55.935 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.935 [2024-05-15 10:57:53.037045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.935 [2024-05-15 10:57:53.107670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.935 
10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.935 10:57:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.313 10:57:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.313 10:57:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.313 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:57.314 10:57:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.314 00:05:57.314 real 0m1.338s 00:05:57.314 user 0m1.230s 00:05:57.314 sys 0m0.122s 00:05:57.314 10:57:54 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:57.314 10:57:54 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:57.314 ************************************ 00:05:57.314 END TEST accel_decomp 00:05:57.314 ************************************ 00:05:57.314 
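Before the next variant starts, a standalone reproduction of the compress/decompress invocations captured above may be useful; the flags are exactly those visible in the trace, with the JSON config (normally piped over /dev/fd/62 by the harness) omitted. Treat this as a sketch, not harness documentation:

    # Re-run the software decompress workload against the bundled bib input for one second;
    # -l names the input file and -y is kept as in the trace above (output verification).
    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y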
10:57:54 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:57.314 10:57:54 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:05:57.314 10:57:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:57.314 10:57:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.314 ************************************ 00:05:57.314 START TEST accel_decmop_full 00:05:57.314 ************************************ 00:05:57.314 10:57:54 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:57.314 [2024-05-15 10:57:54.372813] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:05:57.314 [2024-05-15 10:57:54.372866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396970 ] 00:05:57.314 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.314 [2024-05-15 10:57:54.437952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.314 [2024-05-15 10:57:54.507997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
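The accel_decmop_full run differs from plain accel_decomp only in passing -o 0, and the expected transfer value traced above changes from '4096 bytes' to '111250 bytes'; the apparent intent is to decompress the whole bib input in a single operation rather than in 4 KiB chunks (an editorial reading of the trace, not harness documentation):

    # Chunked vs. full-buffer decompress runs in this log (values copied from the trace):
    #   accel_decomp       ->  '4096 bytes'    fixed-size transfers
    #   accel_decmop_full  ->  '111250 bytes'  -o 0, one transfer covering the input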
00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.314 10:57:54 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.694 10:57:55 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.694 00:05:58.694 real 0m1.331s 00:05:58.694 user 0m1.216s 00:05:58.694 sys 0m0.127s 00:05:58.694 10:57:55 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:58.694 10:57:55 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:58.694 ************************************ 00:05:58.694 END TEST accel_decmop_full 00:05:58.694 ************************************ 00:05:58.694 10:57:55 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:58.694 10:57:55 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:05:58.694 10:57:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:58.694 10:57:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.694 ************************************ 00:05:58.694 START TEST accel_decomp_mcore 00:05:58.694 ************************************ 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:58.694 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:58.694 [2024-05-15 10:57:55.790044] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:05:58.694 [2024-05-15 10:57:55.790127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1397256 ] 00:05:58.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.694 [2024-05-15 10:57:55.858247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.694 [2024-05-15 10:57:55.932138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.694 [2024-05-15 10:57:55.932235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.694 [2024-05-15 10:57:55.932321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.694 [2024-05-15 10:57:55.932322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.954 10:57:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
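For reference, the accel_decomp_mcore case traced here reduces to a single accel_perf invocation. A minimal sketch for reproducing it outside the harness follows; the paths are the ones used in this workspace, and dropping the harness's -c /dev/fd/62 accel JSON config is an assumption (the trace shows accel_json_cfg=() is empty, so no hardware module config is loaded in this run anyway).

    #!/usr/bin/env bash
    # Sketch only: standalone re-run of the multicore decompress perf case from this log.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # -t 1    run for 1 second         -w decompress  workload under test
    # -l      compressed input file    -y             verify the decompressed output
    # -m 0xf  core mask, matching the four reactors started in the trace above
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -m 0xf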
00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.891 00:05:59.891 real 0m1.351s 00:05:59.891 user 0m4.570s 00:05:59.891 sys 0m0.130s 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:59.891 10:57:57 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:59.891 ************************************ 00:05:59.891 END TEST accel_decomp_mcore 00:05:59.891 ************************************ 00:05:59.891 10:57:57 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:59.891 10:57:57 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:05:59.891 10:57:57 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:59.891 10:57:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.150 ************************************ 00:06:00.150 START TEST accel_decomp_full_mcore 00:06:00.150 ************************************ 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:00.150 [2024-05-15 10:57:57.206228] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:00.150 [2024-05-15 10:57:57.206322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1397484 ] 00:06:00.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.150 [2024-05-15 10:57:57.275625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.150 [2024-05-15 10:57:57.349284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.150 [2024-05-15 10:57:57.349388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.150 [2024-05-15 10:57:57.349466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.150 [2024-05-15 10:57:57.349468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:00.150 10:57:57 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.150 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.151 10:57:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.530 00:06:01.530 real 0m1.363s 00:06:01.530 user 0m4.602s 00:06:01.530 sys 0m0.133s 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:01.530 10:57:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:01.530 ************************************ 00:06:01.530 END TEST accel_decomp_full_mcore 00:06:01.530 ************************************ 00:06:01.530 10:57:58 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.530 10:57:58 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:01.530 10:57:58 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:01.530 10:57:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.530 ************************************ 00:06:01.530 START TEST accel_decomp_mthread 00:06:01.530 ************************************ 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:01.530 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:06:01.530 [2024-05-15 10:57:58.649348] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:01.530 [2024-05-15 10:57:58.649436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1397718 ] 00:06:01.530 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.530 [2024-05-15 10:57:58.719838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.789 [2024-05-15 10:57:58.795720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 
10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 10:57:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 10:57:58 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.726 00:06:02.726 real 0m1.349s 00:06:02.726 user 0m1.221s 00:06:02.726 sys 0m0.143s 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:02.726 10:57:59 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:02.726 ************************************ 00:06:02.726 END TEST accel_decomp_mthread 00:06:02.726 ************************************ 00:06:02.985 10:58:00 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:02.985 10:58:00 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:02.985 10:58:00 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:02.985 
10:58:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.985 ************************************ 00:06:02.985 START TEST accel_decomp_full_mthread 00:06:02.985 ************************************ 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:02.985 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:02.985 [2024-05-15 10:58:00.092620] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
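The three sibling decompress cases in this stretch differ from the run above only in their accel_perf flags. Read directly off the traces (with accel_perf standing for $SPDK/build/examples/accel_perf, as before):

    # accel_decomp_full_mcore: full-size buffers ('111250 bytes' in the trace) on 4 cores
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf
    # accel_decomp_mthread: one core (EAL mask 0x1 in the trace), multi-threaded via -T 2
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2
    # accel_decomp_full_mthread: both -o 0 and -T 2 (the case starting here)
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2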
00:06:02.985 [2024-05-15 10:58:00.092702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1397949 ] 00:06:02.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.985 [2024-05-15 10:58:00.166065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.985 [2024-05-15 10:58:00.241953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.244 10:58:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.181 00:06:04.181 real 0m1.373s 00:06:04.181 user 0m1.252s 00:06:04.181 sys 0m0.135s 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:04.181 10:58:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:04.181 ************************************ 00:06:04.181 END TEST accel_decomp_full_mthread 00:06:04.181 
************************************ 00:06:04.440 10:58:01 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:04.440 10:58:01 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:04.440 10:58:01 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:04.440 10:58:01 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:04.440 10:58:01 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:04.440 10:58:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.440 10:58:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.440 10:58:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.440 10:58:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.440 10:58:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.440 10:58:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.440 10:58:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:04.440 10:58:01 accel -- accel/accel.sh@41 -- # jq -r . 00:06:04.440 ************************************ 00:06:04.440 START TEST accel_dif_functional_tests 00:06:04.440 ************************************ 00:06:04.440 10:58:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:04.440 [2024-05-15 10:58:01.556239] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:04.440 [2024-05-15 10:58:01.556318] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398195 ] 00:06:04.440 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.440 [2024-05-15 10:58:01.624616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.440 [2024-05-15 10:58:01.698512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.440 [2024-05-15 10:58:01.698607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.440 [2024-05-15 10:58:01.698609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.699 00:06:04.699 00:06:04.699 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.699 http://cunit.sourceforge.net/ 00:06:04.699 00:06:04.699 00:06:04.699 Suite: accel_dif 00:06:04.699 Test: verify: DIF generated, GUARD check ...passed 00:06:04.699 Test: verify: DIF generated, APPTAG check ...passed 00:06:04.699 Test: verify: DIF generated, REFTAG check ...passed 00:06:04.699 Test: verify: DIF not generated, GUARD check ...[2024-05-15 10:58:01.767263] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:04.699 [2024-05-15 10:58:01.767313] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:04.699 passed 00:06:04.699 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 10:58:01.767363] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:04.699 [2024-05-15 10:58:01.767386] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:04.699 passed 00:06:04.699 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 10:58:01.767407] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:04.699 [2024-05-15 
10:58:01.767427] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:04.699 passed 00:06:04.699 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:04.699 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 10:58:01.767474] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:04.699 passed 00:06:04.699 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:04.699 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:04.699 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:04.699 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 10:58:01.767577] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:04.699 passed 00:06:04.699 Test: generate copy: DIF generated, GUARD check ...passed 00:06:04.699 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:04.699 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:04.699 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:04.699 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:04.699 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:04.699 Test: generate copy: iovecs-len validate ...[2024-05-15 10:58:01.767748] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:04.699 passed 00:06:04.699 Test: generate copy: buffer alignment validate ...passed 00:06:04.699 00:06:04.700 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.700 suites 1 1 n/a 0 0 00:06:04.700 tests 20 20 20 0 0 00:06:04.700 asserts 204 204 204 0 n/a 00:06:04.700 00:06:04.700 Elapsed time = 0.002 seconds 00:06:04.700 00:06:04.700 real 0m0.397s 00:06:04.700 user 0m0.562s 00:06:04.700 sys 0m0.150s 00:06:04.700 10:58:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:04.700 10:58:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:04.700 ************************************ 00:06:04.700 END TEST accel_dif_functional_tests 00:06:04.700 ************************************ 00:06:04.958 00:06:04.958 real 0m31.482s 00:06:04.958 user 0m34.513s 00:06:04.958 sys 0m4.981s 00:06:04.958 10:58:01 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:04.958 10:58:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.958 ************************************ 00:06:04.959 END TEST accel 00:06:04.959 ************************************ 00:06:04.959 10:58:02 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:04.959 10:58:02 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:04.959 10:58:02 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:04.959 10:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:04.959 ************************************ 00:06:04.959 START TEST accel_rpc 00:06:04.959 ************************************ 00:06:04.959 10:58:02 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:04.959 * Looking for test storage... 
00:06:04.959 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:04.959 10:58:02 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.959 10:58:02 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1398468 00:06:04.959 10:58:02 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1398468 00:06:04.959 10:58:02 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 1398468 ']' 00:06:04.959 10:58:02 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.959 10:58:02 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:04.959 10:58:02 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:04.959 10:58:02 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.959 10:58:02 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:04.959 10:58:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.959 [2024-05-15 10:58:02.188170] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:04.959 [2024-05-15 10:58:02.188221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398468 ] 00:06:04.959 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.218 [2024-05-15 10:58:02.254789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.218 [2024-05-15 10:58:02.337894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.786 10:58:03 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:05.786 10:58:03 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:05.786 10:58:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:05.786 10:58:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:05.786 10:58:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:05.786 10:58:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:05.786 10:58:03 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:05.786 10:58:03 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:05.786 10:58:03 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.786 10:58:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.786 ************************************ 00:06:05.786 START TEST accel_assign_opcode 00:06:05.786 ************************************ 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:05.786 [2024-05-15 10:58:03.035972] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:05.786 [2024-05-15 10:58:03.043980] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:05.786 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:06.053 10:58:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:06.053 10:58:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:06.053 10:58:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:06.053 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:06.053 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:06.053 software 00:06:06.053 00:06:06.053 real 0m0.228s 00:06:06.054 user 0m0.039s 00:06:06.054 sys 0m0.015s 00:06:06.054 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:06.054 10:58:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.054 ************************************ 00:06:06.054 END TEST accel_assign_opcode 00:06:06.054 ************************************ 00:06:06.054 10:58:03 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1398468 00:06:06.054 10:58:03 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 1398468 ']' 00:06:06.054 10:58:03 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 1398468 00:06:06.054 10:58:03 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:06:06.054 10:58:03 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:06.054 10:58:03 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1398468 00:06:06.313 10:58:03 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:06.313 10:58:03 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:06.313 10:58:03 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1398468' 00:06:06.313 killing process with pid 1398468 00:06:06.313 10:58:03 accel_rpc -- common/autotest_common.sh@966 -- # kill 1398468 00:06:06.313 10:58:03 accel_rpc -- common/autotest_common.sh@971 -- # wait 1398468 00:06:06.572 00:06:06.572 real 0m1.588s 00:06:06.572 user 0m1.612s 00:06:06.572 sys 0m0.467s 00:06:06.572 10:58:03 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:06.572 10:58:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.572 ************************************ 00:06:06.572 END TEST accel_rpc 00:06:06.572 ************************************ 00:06:06.572 10:58:03 -- 
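Stripped of the xtrace prefixes, the accel_rpc/accel_assign_opcode flow that just finished is a short sequence of RPCs against a standalone target. A hedged sketch of the equivalent manual steps (the sleep is a crude stand-in for the harness's waitforlisten helper; the RPC socket defaults to /var/tmp/spdk.sock as in the trace):

    # Sketch of the opcode-assignment flow exercised by accel_rpc.sh above.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc & tgt_pid=$!   # target idles until framework_start_init
    sleep 1                                                  # assumption: stands in for waitforlisten
    # pre-init, any module name is accepted for the 'copy' opcode (see the 'incorrect' NOTICE above)
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init              # finish subsystem initialization
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expected output: software
    kill "$tgt_pid"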
spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:06.572 10:58:03 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:06.572 10:58:03 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:06.572 10:58:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.572 ************************************ 00:06:06.572 START TEST app_cmdline 00:06:06.572 ************************************ 00:06:06.572 10:58:03 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:06.830 * Looking for test storage... 00:06:06.830 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:06.830 10:58:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:06.830 10:58:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1398817 00:06:06.830 10:58:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1398817 00:06:06.830 10:58:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:06.830 10:58:03 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 1398817 ']' 00:06:06.830 10:58:03 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.830 10:58:03 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:06.830 10:58:03 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.830 10:58:03 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:06.830 10:58:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.830 [2024-05-15 10:58:03.875553] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
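What the app_cmdline test starting here boils down to, as a minimal sketch run from the SPDK checkout root (the real run uses the absolute workspace paths above plus the waitforlisten/rpc_cmd helpers, and the target is torn down by the killprocess trap):
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # the test waits for /var/tmp/spdk.sock to appear before issuing RPCs
  scripts/rpc.py spdk_get_version          # allowed: returns the version JSON seen below
  scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow-list: fails with -32601 "Method not found"
  kill %1                                  # done by killprocess in the real test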
00:06:06.830 [2024-05-15 10:58:03.875648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398817 ] 00:06:06.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.830 [2024-05-15 10:58:03.944777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.830 [2024-05-15 10:58:04.018518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:07.765 { 00:06:07.765 "version": "SPDK v24.05-pre git sha1 01f10b8a3", 00:06:07.765 "fields": { 00:06:07.765 "major": 24, 00:06:07.765 "minor": 5, 00:06:07.765 "patch": 0, 00:06:07.765 "suffix": "-pre", 00:06:07.765 "commit": "01f10b8a3" 00:06:07.765 } 00:06:07.765 } 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:07.765 10:58:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:07.765 
10:58:04 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:07.765 10:58:04 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.024 request: 00:06:08.024 { 00:06:08.024 "method": "env_dpdk_get_mem_stats", 00:06:08.024 "req_id": 1 00:06:08.024 } 00:06:08.024 Got JSON-RPC error response 00:06:08.024 response: 00:06:08.024 { 00:06:08.024 "code": -32601, 00:06:08.024 "message": "Method not found" 00:06:08.024 } 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:08.024 10:58:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1398817 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 1398817 ']' 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 1398817 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1398817 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1398817' 00:06:08.024 killing process with pid 1398817 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@966 -- # kill 1398817 00:06:08.024 10:58:05 app_cmdline -- common/autotest_common.sh@971 -- # wait 1398817 00:06:08.284 00:06:08.284 real 0m1.689s 00:06:08.284 user 0m1.970s 00:06:08.284 sys 0m0.479s 00:06:08.284 10:58:05 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:08.284 10:58:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.284 ************************************ 00:06:08.284 END TEST app_cmdline 00:06:08.284 ************************************ 00:06:08.284 10:58:05 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:08.284 10:58:05 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:08.284 10:58:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:08.284 10:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.284 ************************************ 00:06:08.284 START TEST version 00:06:08.284 ************************************ 00:06:08.284 10:58:05 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:08.543 * Looking for test storage... 
00:06:08.543 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:08.543 10:58:05 version -- app/version.sh@17 -- # get_header_version major 00:06:08.543 10:58:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:08.543 10:58:05 version -- app/version.sh@14 -- # cut -f2 00:06:08.543 10:58:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.543 10:58:05 version -- app/version.sh@17 -- # major=24 00:06:08.543 10:58:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:08.543 10:58:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:08.543 10:58:05 version -- app/version.sh@14 -- # cut -f2 00:06:08.543 10:58:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.543 10:58:05 version -- app/version.sh@18 -- # minor=5 00:06:08.543 10:58:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:08.543 10:58:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:08.543 10:58:05 version -- app/version.sh@14 -- # cut -f2 00:06:08.543 10:58:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.543 10:58:05 version -- app/version.sh@19 -- # patch=0 00:06:08.543 10:58:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:08.543 10:58:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:08.543 10:58:05 version -- app/version.sh@14 -- # cut -f2 00:06:08.543 10:58:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.543 10:58:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:08.543 10:58:05 version -- app/version.sh@22 -- # version=24.5 00:06:08.543 10:58:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:08.543 10:58:05 version -- app/version.sh@28 -- # version=24.5rc0 00:06:08.544 10:58:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:08.544 10:58:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.544 10:58:05 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:08.544 10:58:05 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:08.544 00:06:08.544 real 0m0.179s 00:06:08.544 user 0m0.103s 00:06:08.544 sys 0m0.123s 00:06:08.544 10:58:05 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:08.544 10:58:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.544 ************************************ 00:06:08.544 END TEST version 00:06:08.544 ************************************ 00:06:08.544 10:58:05 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@194 -- # uname -s 00:06:08.544 10:58:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:08.544 10:58:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.544 10:58:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.544 10:58:05 -- spdk/autotest.sh@207 -- 
# '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:08.544 10:58:05 -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:08.544 10:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.544 10:58:05 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:08.544 10:58:05 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:06:08.544 10:58:05 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:08.544 10:58:05 -- spdk/autotest.sh@367 -- # [[ 1 -eq 1 ]] 00:06:08.544 10:58:05 -- spdk/autotest.sh@368 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:08.544 10:58:05 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:08.544 10:58:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:08.544 10:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.803 ************************************ 00:06:08.803 START TEST llvm_fuzz 00:06:08.803 ************************************ 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:08.803 * Looking for test storage... 
00:06:08.803 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@547 -- # fuzzers=() 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@547 -- # local fuzzers 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@549 -- # [[ -n '' ]] 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@556 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:08.803 10:58:05 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:08.803 10:58:05 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:08.803 ************************************ 00:06:08.803 START TEST nvmf_fuzz 00:06:08.803 ************************************ 00:06:08.803 10:58:05 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:09.065 * Looking for test storage... 
00:06:09.065 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:09.065 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:09.066 10:58:06 
llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:09.066 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:09.066 #define SPDK_CONFIG_H 00:06:09.066 #define SPDK_CONFIG_APPS 1 00:06:09.066 #define SPDK_CONFIG_ARCH native 00:06:09.066 #undef SPDK_CONFIG_ASAN 00:06:09.066 #undef SPDK_CONFIG_AVAHI 00:06:09.066 #undef SPDK_CONFIG_CET 00:06:09.066 #define SPDK_CONFIG_COVERAGE 1 00:06:09.066 #define SPDK_CONFIG_CROSS_PREFIX 00:06:09.066 #undef SPDK_CONFIG_CRYPTO 00:06:09.066 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:09.066 #undef SPDK_CONFIG_CUSTOMOCF 00:06:09.066 #undef SPDK_CONFIG_DAOS 00:06:09.066 #define SPDK_CONFIG_DAOS_DIR 00:06:09.066 #define SPDK_CONFIG_DEBUG 1 00:06:09.066 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:09.066 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:09.066 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:09.066 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:09.066 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:09.066 #undef SPDK_CONFIG_DPDK_UADK 00:06:09.066 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:09.066 #define SPDK_CONFIG_EXAMPLES 1 00:06:09.066 #undef SPDK_CONFIG_FC 00:06:09.066 #define SPDK_CONFIG_FC_PATH 00:06:09.066 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:09.066 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:09.066 #undef SPDK_CONFIG_FUSE 00:06:09.066 #define SPDK_CONFIG_FUZZER 1 00:06:09.066 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:09.066 #undef SPDK_CONFIG_GOLANG 00:06:09.066 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:09.066 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:09.066 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:09.066 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:09.066 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:09.066 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:09.066 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:09.066 #define SPDK_CONFIG_IDXD 1 00:06:09.066 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:09.066 #undef SPDK_CONFIG_IPSEC_MB 00:06:09.066 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:09.066 #define SPDK_CONFIG_ISAL 1 00:06:09.066 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:09.066 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:09.066 #define SPDK_CONFIG_LIBDIR 00:06:09.066 #undef SPDK_CONFIG_LTO 00:06:09.066 #define SPDK_CONFIG_MAX_LCORES 00:06:09.066 #define SPDK_CONFIG_NVME_CUSE 1 00:06:09.066 #undef SPDK_CONFIG_OCF 00:06:09.066 #define SPDK_CONFIG_OCF_PATH 00:06:09.066 #define SPDK_CONFIG_OPENSSL_PATH 00:06:09.066 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:09.066 #define SPDK_CONFIG_PGO_DIR 00:06:09.066 #undef SPDK_CONFIG_PGO_USE 00:06:09.066 #define SPDK_CONFIG_PREFIX /usr/local 00:06:09.066 #undef SPDK_CONFIG_RAID5F 00:06:09.066 #undef 
SPDK_CONFIG_RBD 00:06:09.066 #define SPDK_CONFIG_RDMA 1 00:06:09.066 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:09.066 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:09.066 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:09.066 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:09.066 #undef SPDK_CONFIG_SHARED 00:06:09.066 #undef SPDK_CONFIG_SMA 00:06:09.066 #define SPDK_CONFIG_TESTS 1 00:06:09.066 #undef SPDK_CONFIG_TSAN 00:06:09.066 #define SPDK_CONFIG_UBLK 1 00:06:09.066 #define SPDK_CONFIG_UBSAN 1 00:06:09.066 #undef SPDK_CONFIG_UNIT_TESTS 00:06:09.066 #undef SPDK_CONFIG_URING 00:06:09.066 #define SPDK_CONFIG_URING_PATH 00:06:09.066 #undef SPDK_CONFIG_URING_ZNS 00:06:09.066 #undef SPDK_CONFIG_USDT 00:06:09.066 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:09.066 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:09.066 #define SPDK_CONFIG_VFIO_USER 1 00:06:09.066 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:09.066 #define SPDK_CONFIG_VHOST 1 00:06:09.066 #define SPDK_CONFIG_VIRTIO 1 00:06:09.066 #undef SPDK_CONFIG_VTUNE 00:06:09.066 #define SPDK_CONFIG_VTUNE_DIR 00:06:09.066 #define SPDK_CONFIG_WERROR 1 00:06:09.066 #define SPDK_CONFIG_WPDK_DIR 00:06:09.066 #undef SPDK_CONFIG_XNVME 00:06:09.067 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:06:09.067 10:58:06 
llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # : 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:09.067 
10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:09.067 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # : 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # : 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # : 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # : 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:09.068 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # [[ -z 1399252 ]] 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # kill -0 1399252 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:09.069 
10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.EcpLmy 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.EcpLmy/tests/nvmf /tmp/spdk.EcpLmy 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=968024064 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4316405760 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=52296830976 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=9445474304 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866440192 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342489088 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5971968 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30869565440 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1589248 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:09.069 * Looking for test storage... 
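The storage probe traced above is a plain read loop over `df -T` output that fills per-mount associative arrays, after which the mount backing the test directory is checked against the ~2 GiB the test requested. A minimal sketch of that pattern follows (variable names mirror the trace; this is a simplified illustration, not the verbatim set_test_storage from autotest_common.sh, which also handles fallback directories, tmpfs/ramfs cases, and block-size units):

```bash
#!/usr/bin/env bash
# Sketch of the df-parsing pattern from the set_test_storage trace above
# (assumption: simplified; byte-unit handling and fallback dirs are omitted).
requested_size=2147483648            # ~2 GiB, as passed in the trace
declare -A mounts fss sizes avails uses

while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  avails["$mount"]=$avail
  uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)

# Resolve which mount backs the test directory, then check free space there.
target_dir=$(df "$PWD" | awk '$1 !~ /Filesystem/{print $6}')
target_space=${avails["$target_dir"]}
(( target_space >= requested_size )) && echo "enough space on $target_dir"
```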
00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:09.069 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # target_space=52296830976 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # new_size=11660066816 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:09.070 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # set -o errtrace 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1684 -- # true 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1686 -- # xtrace_fd 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:09.070 10:58:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:09.070 [2024-05-15 10:58:06.314285] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:09.070 [2024-05-15 10:58:06.314368] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399391 ] 00:06:09.329 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.329 [2024-05-15 10:58:06.566202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.588 [2024-05-15 10:58:06.653168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.589 [2024-05-15 10:58:06.712205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.589 [2024-05-15 10:58:06.728158] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:09.589 [2024-05-15 10:58:06.728573] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:09.589 INFO: Running with entropic power schedule (0xFF, 100). 00:06:09.589 INFO: Seed: 4194883324 00:06:09.589 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:09.589 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:09.589 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:09.589 INFO: A corpus is not provided, starting from an empty corpus 00:06:09.589 #2 INITED exec/s: 0 rss: 64Mb 00:06:09.589 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:09.589 This may also happen if the target rejected all inputs we tried so far 00:06:09.589 [2024-05-15 10:58:06.783820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:09.589 [2024-05-15 10:58:06.783853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.849 NEW_FUNC[1/685]: 0x481d20 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:09.849 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:09.849 #10 NEW cov: 11775 ft: 11775 corp: 2/93b lim: 320 exec/s: 0 rss: 70Mb L: 92/92 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:06:09.849 [2024-05-15 10:58:07.094624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:09.849 [2024-05-15 10:58:07.094663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.109 #12 NEW cov: 11905 ft: 12366 corp: 3/181b lim: 320 exec/s: 0 rss: 70Mb L: 88/92 MS: 2 EraseBytes-CopyPart- 00:06:10.109 [2024-05-15 10:58:07.144633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.109 [2024-05-15 10:58:07.144662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.109 #13 NEW cov: 11911 ft: 12680 corp: 4/273b lim: 320 exec/s: 0 rss: 70Mb L: 92/92 MS: 1 CopyPart- 00:06:10.109 [2024-05-15 10:58:07.184858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.109 [2024-05-15 10:58:07.184884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.109 [2024-05-15 10:58:07.184962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:52525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.109 [2024-05-15 10:58:07.184977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.109 #19 NEW cov: 11996 ft: 13222 corp: 5/461b lim: 320 exec/s: 0 rss: 71Mb L: 188/188 MS: 1 InsertRepeatedBytes- 00:06:10.109 [2024-05-15 10:58:07.234923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.109 [2024-05-15 10:58:07.234947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.109 #20 NEW cov: 11996 ft: 13305 corp: 6/577b lim: 320 exec/s: 0 rss: 71Mb L: 116/188 MS: 1 InsertRepeatedBytes- 00:06:10.109 [2024-05-15 10:58:07.285131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:21bababa cdw10:21212121 
cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x2121212121212121 00:06:10.109 [2024-05-15 10:58:07.285156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.109 [2024-05-15 10:58:07.285231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:5 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.109 [2024-05-15 10:58:07.285246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.109 #21 NEW cov: 11996 ft: 13389 corp: 7/706b lim: 320 exec/s: 0 rss: 71Mb L: 129/188 MS: 1 InsertRepeatedBytes- 00:06:10.109 [2024-05-15 10:58:07.325265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.109 [2024-05-15 10:58:07.325289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.109 [2024-05-15 10:58:07.325372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:50525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.109 [2024-05-15 10:58:07.325393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.109 #22 NEW cov: 11996 ft: 13432 corp: 8/894b lim: 320 exec/s: 0 rss: 71Mb L: 188/188 MS: 1 ChangeBit- 00:06:10.368 [2024-05-15 10:58:07.375465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:21bababa cdw10:21212121 cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x2121212121212121 00:06:10.368 [2024-05-15 10:58:07.375491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.368 [2024-05-15 10:58:07.375556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:5 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.368 [2024-05-15 10:58:07.375570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.368 #23 NEW cov: 11996 ft: 13534 corp: 9/1023b lim: 320 exec/s: 0 rss: 71Mb L: 129/188 MS: 1 ShuffleBytes- 00:06:10.368 [2024-05-15 10:58:07.425546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:525252ff cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.368 [2024-05-15 10:58:07.425572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.368 [2024-05-15 10:58:07.425636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:50525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.368 [2024-05-15 10:58:07.425649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.368 #24 NEW cov: 11996 ft: 13592 corp: 10/1211b lim: 320 exec/s: 0 rss: 71Mb L: 188/188 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:10.368 [2024-05-15 10:58:07.475700] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:21bababa cdw10:21212121 cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x2121212121212121 00:06:10.368 [2024-05-15 10:58:07.475725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.368 [2024-05-15 10:58:07.475803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:5 nsid:babababa cdw10:babababa cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.368 [2024-05-15 10:58:07.475817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.368 #25 NEW cov: 11996 ft: 13613 corp: 11/1344b lim: 320 exec/s: 0 rss: 71Mb L: 133/188 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:10.368 [2024-05-15 10:58:07.515649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.369 [2024-05-15 10:58:07.515674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.369 #26 NEW cov: 11996 ft: 13662 corp: 12/1432b lim: 320 exec/s: 0 rss: 71Mb L: 88/188 MS: 1 ChangeBinInt- 00:06:10.369 [2024-05-15 10:58:07.555927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.369 [2024-05-15 10:58:07.555953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.369 [2024-05-15 10:58:07.556029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:52525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.369 [2024-05-15 10:58:07.556046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.369 #27 NEW cov: 11996 ft: 13699 corp: 13/1620b lim: 320 exec/s: 0 rss: 71Mb L: 188/188 MS: 1 ChangeBit- 00:06:10.369 [2024-05-15 10:58:07.595898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.369 [2024-05-15 10:58:07.595924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.369 #28 NEW cov: 11996 ft: 13709 corp: 14/1708b lim: 320 exec/s: 0 rss: 71Mb L: 88/188 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:10.628 [2024-05-15 10:58:07.636058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.628 [2024-05-15 10:58:07.636084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.628 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:10.628 #29 NEW cov: 12019 ft: 13796 corp: 15/1824b lim: 320 exec/s: 0 rss: 72Mb L: 116/188 MS: 1 ShuffleBytes- 00:06:10.628 [2024-05-15 10:58:07.686139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN 
COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.628 [2024-05-15 10:58:07.686165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.628 #30 NEW cov: 12019 ft: 13834 corp: 16/1916b lim: 320 exec/s: 0 rss: 72Mb L: 92/188 MS: 1 ShuffleBytes- 00:06:10.628 [2024-05-15 10:58:07.726418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:525252ff cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.628 [2024-05-15 10:58:07.726444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.628 [2024-05-15 10:58:07.726505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:50525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.628 [2024-05-15 10:58:07.726519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.628 #31 NEW cov: 12019 ft: 13848 corp: 17/2104b lim: 320 exec/s: 0 rss: 72Mb L: 188/188 MS: 1 ChangeByte- 00:06:10.628 [2024-05-15 10:58:07.776421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.628 [2024-05-15 10:58:07.776448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.628 #32 NEW cov: 12019 ft: 13875 corp: 18/2196b lim: 320 exec/s: 32 rss: 72Mb L: 92/188 MS: 1 ChangeBinInt- 00:06:10.628 [2024-05-15 10:58:07.826551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.628 [2024-05-15 10:58:07.826577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.628 #33 NEW cov: 12019 ft: 13887 corp: 19/2288b lim: 320 exec/s: 33 rss: 72Mb L: 92/188 MS: 1 ChangeByte- 00:06:10.628 [2024-05-15 10:58:07.866799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.628 [2024-05-15 10:58:07.866825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.628 [2024-05-15 10:58:07.866889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:50525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.628 [2024-05-15 10:58:07.866906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.628 #34 NEW cov: 12019 ft: 13891 corp: 20/2441b lim: 320 exec/s: 34 rss: 72Mb L: 153/188 MS: 1 CrossOver- 00:06:10.887 [2024-05-15 10:58:07.906761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (40) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa 00:06:10.887 [2024-05-15 10:58:07.906786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:10.887 #35 NEW cov: 12020 ft: 13921 corp: 21/2534b lim: 320 exec/s: 35 rss: 72Mb L: 93/188 MS: 1 InsertByte- 00:06:10.887 [2024-05-15 10:58:07.947093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.887 [2024-05-15 10:58:07.947118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.887 [2024-05-15 10:58:07.947183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:5 nsid:4b4b4b4b cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:06:10.887 [2024-05-15 10:58:07.947196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.887 [2024-05-15 10:58:07.947260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:6 nsid:4b4b4b4b cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:06:10.887 [2024-05-15 10:58:07.947273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.887 NEW_FUNC[1/1]: 0x1338910 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2038 00:06:10.888 #36 NEW cov: 12051 ft: 14133 corp: 22/2738b lim: 320 exec/s: 36 rss: 72Mb L: 204/204 MS: 1 InsertRepeatedBytes- 00:06:10.888 [2024-05-15 10:58:07.997043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:52525252 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababa5252 00:06:10.888 [2024-05-15 10:58:07.997069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.888 #37 NEW cov: 12051 ft: 14149 corp: 23/2854b lim: 320 exec/s: 37 rss: 72Mb L: 116/204 MS: 1 CrossOver- 00:06:10.888 [2024-05-15 10:58:08.037122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:10.888 [2024-05-15 10:58:08.037147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.888 #38 NEW cov: 12051 ft: 14160 corp: 24/2962b lim: 320 exec/s: 38 rss: 72Mb L: 108/204 MS: 1 CopyPart- 00:06:10.888 [2024-05-15 10:58:08.077385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.888 [2024-05-15 10:58:08.077409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.888 [2024-05-15 10:58:08.077471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:50525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.888 [2024-05-15 10:58:08.077485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.888 #39 NEW cov: 12051 ft: 14170 corp: 25/3150b lim: 320 exec/s: 39 rss: 72Mb L: 188/204 MS: 1 ChangeByte- 00:06:10.888 [2024-05-15 10:58:08.117697] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:10.888 [2024-05-15 10:58:08.117728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.888 [2024-05-15 10:58:08.117808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (45) qid:0 cid:5 nsid:45454545 cdw10:45454545 cdw11:45454545 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.888 [2024-05-15 10:58:08.117823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.888 [2024-05-15 10:58:08.117887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (45) qid:0 cid:6 nsid:45454545 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:10.888 [2024-05-15 10:58:08.117900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.888 NEW_FUNC[1/1]: 0x175ed90 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:10.888 #40 NEW cov: 12064 ft: 14597 corp: 26/3365b lim: 320 exec/s: 40 rss: 72Mb L: 215/215 MS: 1 InsertRepeatedBytes- 00:06:11.147 [2024-05-15 10:58:08.167721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.147 [2024-05-15 10:58:08.167746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.147 [2024-05-15 10:58:08.167811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:5 nsid:4b4b4b4b cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:06:11.147 [2024-05-15 10:58:08.167825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.147 [2024-05-15 10:58:08.167889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:6 nsid:4b4b4b4b cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b514b 00:06:11.147 [2024-05-15 10:58:08.167903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.147 #41 NEW cov: 12064 ft: 14612 corp: 27/3569b lim: 320 exec/s: 41 rss: 72Mb L: 204/215 MS: 1 ChangeBinInt- 00:06:11.147 [2024-05-15 10:58:08.217692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:52525252 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababa5252 00:06:11.147 [2024-05-15 10:58:08.217718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.147 #42 NEW cov: 12064 ft: 14618 corp: 28/3689b lim: 320 exec/s: 42 rss: 72Mb L: 120/215 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:11.147 [2024-05-15 10:58:08.267886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.147 [2024-05-15 10:58:08.267911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.147 #43 NEW cov: 12064 ft: 14620 corp: 29/3797b lim: 320 exec/s: 43 rss: 73Mb L: 108/215 MS: 1 ChangeBinInt- 00:06:11.147 [2024-05-15 10:58:08.318182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.147 [2024-05-15 10:58:08.318207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.148 [2024-05-15 10:58:08.318271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:5 nsid:4b4b4b4b cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:06:11.148 [2024-05-15 10:58:08.318286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.148 [2024-05-15 10:58:08.318352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:6 nsid:4b4b4b4b cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4bb2b4b4b4 00:06:11.148 [2024-05-15 10:58:08.318365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.148 #44 NEW cov: 12064 ft: 14626 corp: 30/4001b lim: 320 exec/s: 44 rss: 73Mb L: 204/215 MS: 1 ChangeBinInt- 00:06:11.148 [2024-05-15 10:58:08.358253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:11.148 [2024-05-15 10:58:08.358278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.148 [2024-05-15 10:58:08.358354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:5 nsid:babababa cdw10:24242424 cdw11:24242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2424242424242424 00:06:11.148 [2024-05-15 10:58:08.358368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.148 #45 NEW cov: 12064 ft: 14635 corp: 31/4176b lim: 320 exec/s: 45 rss: 73Mb L: 175/215 MS: 1 InsertRepeatedBytes- 00:06:11.148 [2024-05-15 10:58:08.398232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (65) qid:0 cid:4 nsid:52525252 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.148 [2024-05-15 10:58:08.398257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.407 #46 NEW cov: 12064 ft: 14979 corp: 32/4293b lim: 320 exec/s: 46 rss: 73Mb L: 117/215 MS: 1 InsertByte- 00:06:11.407 [2024-05-15 10:58:08.438374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.407 [2024-05-15 10:58:08.438405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.407 #47 NEW cov: 12064 ft: 14991 corp: 33/4386b lim: 320 exec/s: 47 rss: 73Mb L: 93/215 MS: 1 InsertByte- 00:06:11.407 [2024-05-15 10:58:08.478641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa 
cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.407 [2024-05-15 10:58:08.478665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.407 [2024-05-15 10:58:08.478729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:5 nsid:4b4b4b4b cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:06:11.407 [2024-05-15 10:58:08.478742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.407 [2024-05-15 10:58:08.478804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:6 nsid:4b4b4b4b cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4bb2b4b4b4 00:06:11.407 [2024-05-15 10:58:08.478817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.407 #48 NEW cov: 12064 ft: 15009 corp: 34/4590b lim: 320 exec/s: 48 rss: 73Mb L: 204/215 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:11.407 [2024-05-15 10:58:08.528962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:52525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:11.407 [2024-05-15 10:58:08.528987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.407 [2024-05-15 10:58:08.529063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:50525252 cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:11.407 [2024-05-15 10:58:08.529080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.407 [2024-05-15 10:58:08.529140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:6 nsid:babababa cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:11.407 [2024-05-15 10:58:08.529153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.407 [2024-05-15 10:58:08.529212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:11.407 [2024-05-15 10:58:08.529226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.407 #49 NEW cov: 12064 ft: 15215 corp: 35/4887b lim: 320 exec/s: 49 rss: 73Mb L: 297/297 MS: 1 InsertRepeatedBytes- 00:06:11.407 [2024-05-15 10:58:08.568770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:525252ff cdw11:52525252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:11.407 [2024-05-15 10:58:08.568795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.407 [2024-05-15 10:58:08.568870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (52) qid:0 cid:5 nsid:52525252 cdw10:bababa52 cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x5252525252525252 00:06:11.407 [2024-05-15 10:58:08.568885] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.407 #50 NEW cov: 12064 ft: 15226 corp: 36/5075b lim: 320 exec/s: 50 rss: 73Mb L: 188/297 MS: 1 CopyPart- 00:06:11.407 [2024-05-15 10:58:08.608798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.407 [2024-05-15 10:58:08.608823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.407 #51 NEW cov: 12064 ft: 15276 corp: 37/5183b lim: 320 exec/s: 51 rss: 73Mb L: 108/297 MS: 1 CopyPart- 00:06:11.407 [2024-05-15 10:58:08.658939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:baba5252 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.407 [2024-05-15 10:58:08.658964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.668 #52 NEW cov: 12064 ft: 15304 corp: 38/5251b lim: 320 exec/s: 52 rss: 73Mb L: 68/297 MS: 1 EraseBytes- 00:06:11.668 [2024-05-15 10:58:08.699107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.668 [2024-05-15 10:58:08.699132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.668 #53 NEW cov: 12064 ft: 15326 corp: 39/5339b lim: 320 exec/s: 53 rss: 73Mb L: 88/297 MS: 1 ChangeByte- 00:06:11.668 [2024-05-15 10:58:08.749366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ba) qid:0 cid:4 nsid:babababa cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0xbabababababababa 00:06:11.668 [2024-05-15 10:58:08.749397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.668 [2024-05-15 10:58:08.749471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:5 nsid:4b4b4b4b cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4b4b4b4b4b 00:06:11.668 [2024-05-15 10:58:08.749486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.668 [2024-05-15 10:58:08.749547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (4b) qid:0 cid:6 nsid:4b4b4b4b cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x4b4b4b4bb2b4b4b4 00:06:11.668 [2024-05-15 10:58:08.749563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.668 #54 NEW cov: 12064 ft: 15333 corp: 40/5561b lim: 320 exec/s: 27 rss: 73Mb L: 222/297 MS: 1 InsertRepeatedBytes- 00:06:11.668 #54 DONE cov: 12064 ft: 15333 corp: 40/5561b lim: 320 exec/s: 27 rss: 73Mb 00:06:11.668 ###### Recommended dictionary. ###### 00:06:11.668 "\377\377\377\377" # Uses: 1 00:06:11.668 "\000\000\000\000" # Uses: 0 00:06:11.668 "\001\000\000\000" # Uses: 1 00:06:11.668 ###### End of recommended dictionary. 
###### 00:06:11.668 Done 54 runs in 2 second(s) 00:06:11.668 [2024-05-15 10:58:08.769865] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:11.668 10:58:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:11.927 [2024-05-15 10:58:08.935005] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
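The second iteration starting here repeats the same recipe as the first, only with fuzzer type 1 and TCP port 4401. Pieced together from the nvmf/run.sh trace, each short-fuzz iteration boils down to roughly the loop below (a simplified sketch, not the verbatim script; the -P output directory, LSAN suppression setup, and cleanup handling are omitted, and the paths are those shown in the trace):

```bash
#!/usr/bin/env bash
# Sketch of the per-fuzzer launch recipe seen in the nvmf/run.sh trace
# (assumption: simplified; suppression files and result collection omitted).
spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
fuzz_src=$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c
fuzz_num=$(grep -c '\.fn =' "$fuzz_src")     # 25 admin-command fuzzers in the trace
time_per_fuzzer=1                            # seconds; the "short" in short-fuzz

for (( i = 0; i < fuzz_num; i++ )); do
  port=44$(printf %02d "$i")                 # 4400, 4401, ... as in the trace
  corpus=$spdk/../corpus/llvm_nvmf_$i
  cfg=/tmp/fuzz_json_$i.conf
  mkdir -p "$corpus"
  # Re-point the JSON target config at this iteration's TCP service id.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$cfg"
  "$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
      -c "$cfg" -t "$time_per_fuzzer" -D "$corpus" -Z "$i"
done
```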
00:06:11.927 [2024-05-15 10:58:08.935076] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399824 ] 00:06:11.927 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.927 [2024-05-15 10:58:09.189897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.185 [2024-05-15 10:58:09.273827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.185 [2024-05-15 10:58:09.333118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.185 [2024-05-15 10:58:09.349067] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:12.185 [2024-05-15 10:58:09.349516] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:12.185 INFO: Running with entropic power schedule (0xFF, 100). 00:06:12.185 INFO: Seed: 2519722759 00:06:12.185 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:12.185 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:12.185 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:12.185 INFO: A corpus is not provided, starting from an empty corpus 00:06:12.185 #2 INITED exec/s: 0 rss: 63Mb 00:06:12.185 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:12.185 This may also happen if the target rejected all inputs we tried so far 00:06:12.185 [2024-05-15 10:58:09.416668] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.185 [2024-05-15 10:58:09.416977] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.185 [2024-05-15 10:58:09.417273] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.185 [2024-05-15 10:58:09.417583] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.185 [2024-05-15 10:58:09.418109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.185 [2024-05-15 10:58:09.418151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.185 [2024-05-15 10:58:09.418236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.185 [2024-05-15 10:58:09.418252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.185 [2024-05-15 10:58:09.418338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.185 [2024-05-15 10:58:09.418355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.185 [2024-05-15 10:58:09.418443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:12.185 [2024-05-15 10:58:09.418460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.753 NEW_FUNC[1/686]: 0x482620 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:12.753 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:12.753 #4 NEW cov: 11853 ft: 11853 corp: 2/27b lim: 30 exec/s: 0 rss: 70Mb L: 26/26 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:12.753 [2024-05-15 10:58:09.746537] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.753 [2024-05-15 10:58:09.746707] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.753 [2024-05-15 10:58:09.746859] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.753 [2024-05-15 10:58:09.747008] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.753 [2024-05-15 10:58:09.747344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5d2883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.753 [2024-05-15 10:58:09.747389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.753 [2024-05-15 10:58:09.747511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.753 [2024-05-15 10:58:09.747531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.753 [2024-05-15 10:58:09.747655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.753 [2024-05-15 10:58:09.747680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.753 [2024-05-15 10:58:09.747800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.753 [2024-05-15 10:58:09.747819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.753 #8 NEW cov: 11983 ft: 12587 corp: 3/51b lim: 30 exec/s: 0 rss: 70Mb L: 24/26 MS: 4 InsertRepeatedBytes-InsertByte-CopyPart-CrossOver- 00:06:12.753 [2024-05-15 10:58:09.786538] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41984) > buf size (4096) 00:06:12.753 [2024-05-15 10:58:09.786708] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.786861] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.787009] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.787353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.787390] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.787510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.787529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.787647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.787663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.787775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.787794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.754 #9 NEW cov: 12012 ft: 12815 corp: 4/80b lim: 30 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:12.754 [2024-05-15 10:58:09.836644] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2e 00:06:12.754 [2024-05-15 10:58:09.836977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.837005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.837123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.837141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.754 #12 NEW cov: 12114 ft: 13704 corp: 5/93b lim: 30 exec/s: 0 rss: 70Mb L: 13/29 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:06:12.754 [2024-05-15 10:58:09.876798] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.876976] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.877121] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.877266] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.877634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.877665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.877778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.877793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.877909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.877928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.878044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.878061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.754 #13 NEW cov: 12114 ft: 13811 corp: 6/119b lim: 30 exec/s: 0 rss: 71Mb L: 26/29 MS: 1 CopyPart- 00:06:12.754 [2024-05-15 10:58:09.916887] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.917046] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.917199] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.917349] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.917718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.917747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.917871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.917888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.918008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.918029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.918152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.918171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.754 #14 NEW cov: 12114 ft: 13890 corp: 7/145b lim: 30 exec/s: 0 rss: 71Mb L: 26/29 MS: 1 ChangeBit- 00:06:12.754 [2024-05-15 10:58:09.957047] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.957202] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.957357] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.957539] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:09.957875] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5d2883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.957904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.958025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.958041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.958164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.958183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:09.958309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:09.958326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.754 #15 NEW cov: 12114 ft: 13982 corp: 8/169b lim: 30 exec/s: 0 rss: 71Mb L: 24/29 MS: 1 CopyPart- 00:06:12.754 [2024-05-15 10:58:10.007289] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:10.007510] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff3a 00:06:12.754 [2024-05-15 10:58:10.007662] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:10.007815] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:12.754 [2024-05-15 10:58:10.008159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:10.008190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:10.008315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:10.008335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:10.008466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:10.008483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.754 [2024-05-15 10:58:10.008607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.754 [2024-05-15 10:58:10.008627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.014 #16 NEW cov: 12114 
ft: 14112 corp: 9/195b lim: 30 exec/s: 0 rss: 71Mb L: 26/29 MS: 1 ChangeByte- 00:06:13.014 [2024-05-15 10:58:10.058713] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.058879] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.059043] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.059396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.059424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.059539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.059556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.059671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.059689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.014 #17 NEW cov: 12114 ft: 14344 corp: 10/216b lim: 30 exec/s: 0 rss: 71Mb L: 21/29 MS: 1 CrossOver- 00:06:13.014 [2024-05-15 10:58:10.097537] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.097691] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.097844] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002fff 00:06:13.014 [2024-05-15 10:58:10.097998] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.098331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.098359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.098485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.098504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.098618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.098635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.098749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.098766] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.014 #18 NEW cov: 12114 ft: 14461 corp: 11/242b lim: 30 exec/s: 0 rss: 71Mb L: 26/29 MS: 1 ChangeByte- 00:06:13.014 [2024-05-15 10:58:10.147525] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.147709] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.148033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff833a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.148062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.148184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.148202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.014 #19 NEW cov: 12114 ft: 14517 corp: 12/259b lim: 30 exec/s: 0 rss: 72Mb L: 17/29 MS: 1 EraseBytes- 00:06:13.014 [2024-05-15 10:58:10.197787] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41984) > buf size (4096) 00:06:13.014 [2024-05-15 10:58:10.197959] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.198122] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.198273] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.014 [2024-05-15 10:58:10.198623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.198656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.198780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.198798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.198923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.198946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.199079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.199098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.014 #20 NEW cov: 12114 ft: 14531 corp: 13/288b lim: 30 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 CopyPart- 00:06:13.014 [2024-05-15 10:58:10.247752] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xaf 00:06:13.014 
[2024-05-15 10:58:10.247915] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:13.014 [2024-05-15 10:58:10.248083] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (179904) > buf size (4096) 00:06:13.014 [2024-05-15 10:58:10.248410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.248441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.248561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:afaf83af cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.248579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.014 [2024-05-15 10:58:10.248710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:afaf00af cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.014 [2024-05-15 10:58:10.248730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.274 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:13.274 #21 NEW cov: 12137 ft: 14563 corp: 14/311b lim: 30 exec/s: 0 rss: 72Mb L: 23/29 MS: 1 InsertRepeatedBytes- 00:06:13.274 [2024-05-15 10:58:10.297699] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.297866] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.298005] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.298146] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.298473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.298503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.298632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5dff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.298652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.298778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.298800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.298924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.298944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.274 #22 NEW cov: 12137 ft: 14586 corp: 15/338b lim: 30 exec/s: 0 rss: 72Mb L: 27/29 MS: 1 InsertByte- 00:06:13.274 [2024-05-15 10:58:10.338125] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.338288] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.338453] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.338786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.338816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.338940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff3a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.338959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.339083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.339100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.274 #23 NEW cov: 12137 ft: 14601 corp: 16/360b lim: 30 exec/s: 0 rss: 72Mb L: 22/29 MS: 1 CrossOver- 00:06:13.274 [2024-05-15 10:58:10.388334] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c7c7 00:06:13.274 [2024-05-15 10:58:10.388498] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c7c7 00:06:13.274 [2024-05-15 10:58:10.388650] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c7c7 00:06:13.274 [2024-05-15 10:58:10.388814] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c7c7 00:06:13.274 [2024-05-15 10:58:10.389149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:845d8328 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.389177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.389304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:c7c783c7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.389322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.389442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:c7c783c7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.389460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.389569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:c7c783c7 cdw11:00000003 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.389587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.274 #28 NEW cov: 12137 ft: 14621 corp: 17/389b lim: 30 exec/s: 28 rss: 72Mb L: 29/29 MS: 5 CrossOver-ShuffleBytes-InsertByte-EraseBytes-InsertRepeatedBytes- 00:06:13.274 [2024-05-15 10:58:10.428470] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41984) > buf size (4096) 00:06:13.274 [2024-05-15 10:58:10.428626] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.428779] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.428919] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.274 [2024-05-15 10:58:10.429247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.429277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.429405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff8360 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.274 [2024-05-15 10:58:10.429423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.274 [2024-05-15 10:58:10.429546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.275 [2024-05-15 10:58:10.429564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.275 [2024-05-15 10:58:10.429684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.275 [2024-05-15 10:58:10.429703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.275 #29 NEW cov: 12137 ft: 14685 corp: 18/418b lim: 30 exec/s: 29 rss: 72Mb L: 29/29 MS: 1 ChangeByte- 00:06:13.275 [2024-05-15 10:58:10.478178] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.275 [2024-05-15 10:58:10.478342] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.275 [2024-05-15 10:58:10.478684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.275 [2024-05-15 10:58:10.478718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.275 [2024-05-15 10:58:10.478837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.275 [2024-05-15 10:58:10.478856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.275 #30 NEW cov: 12137 ft: 14764 corp: 19/432b lim: 30 exec/s: 30 rss: 72Mb L: 14/29 MS: 1 EraseBytes- 
00:06:13.275 [2024-05-15 10:58:10.518615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.275 [2024-05-15 10:58:10.518643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.275 [2024-05-15 10:58:10.518760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.275 [2024-05-15 10:58:10.518780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.534 #31 NEW cov: 12137 ft: 14844 corp: 20/448b lim: 30 exec/s: 31 rss: 72Mb L: 16/29 MS: 1 CopyPart- 00:06:13.534 [2024-05-15 10:58:10.558532] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.558696] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.558850] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.559029] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.559388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.559418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.559538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff835d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.559558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.559687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff833a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.559707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.559836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.559855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.534 #32 NEW cov: 12137 ft: 14854 corp: 21/477b lim: 30 exec/s: 32 rss: 72Mb L: 29/29 MS: 1 CopyPart- 00:06:13.534 [2024-05-15 10:58:10.608746] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.608908] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.609065] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.609220] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.609566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5d2883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.609597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.609725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.609745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.609867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.609888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.610005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.610025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.534 #33 NEW cov: 12137 ft: 14909 corp: 22/501b lim: 30 exec/s: 33 rss: 72Mb L: 24/29 MS: 1 ShuffleBytes- 00:06:13.534 [2024-05-15 10:58:10.659051] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.659222] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.659377] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.659535] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.659886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.659919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.660050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.660071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.660199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.660217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.660341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.660360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.534 #34 NEW cov: 12137 ft: 14913 corp: 23/530b lim: 30 exec/s: 34 
rss: 73Mb L: 29/29 MS: 1 CopyPart- 00:06:13.534 [2024-05-15 10:58:10.709152] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.709315] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.709475] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.709822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.709851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.709971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff3a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.709990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.534 [2024-05-15 10:58:10.710101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff3a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.534 [2024-05-15 10:58:10.710119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.534 #35 NEW cov: 12137 ft: 14921 corp: 24/552b lim: 30 exec/s: 35 rss: 73Mb L: 22/29 MS: 1 CopyPart- 00:06:13.534 [2024-05-15 10:58:10.769399] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (828416) > buf size (4096) 00:06:13.534 [2024-05-15 10:58:10.769552] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.534 [2024-05-15 10:58:10.769714] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.535 [2024-05-15 10:58:10.770048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff833a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.535 [2024-05-15 10:58:10.770077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.535 [2024-05-15 10:58:10.770211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.535 [2024-05-15 10:58:10.770231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.535 [2024-05-15 10:58:10.770358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.535 [2024-05-15 10:58:10.770383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.535 #36 NEW cov: 12137 ft: 14940 corp: 25/573b lim: 30 exec/s: 36 rss: 73Mb L: 21/29 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:13.794 [2024-05-15 10:58:10.809519] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (828416) > buf size (4096) 00:06:13.794 [2024-05-15 10:58:10.809838] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 
10:58:10.809991] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fffb 00:06:13.794 [2024-05-15 10:58:10.810330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff833a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.810360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.810480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.810499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.810618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.810635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.810748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.810765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.794 #37 NEW cov: 12137 ft: 14961 corp: 26/598b lim: 30 exec/s: 37 rss: 73Mb L: 25/29 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:06:13.794 [2024-05-15 10:58:10.859330] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.859681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.859710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.794 #38 NEW cov: 12137 ft: 15314 corp: 27/609b lim: 30 exec/s: 38 rss: 73Mb L: 11/29 MS: 1 EraseBytes- 00:06:13.794 [2024-05-15 10:58:10.899765] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.899920] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.900067] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.900224] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.900577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.900605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.900726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5dff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.900745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.900868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.900889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.901013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.901032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.794 #39 NEW cov: 12137 ft: 15327 corp: 28/636b lim: 30 exec/s: 39 rss: 73Mb L: 27/29 MS: 1 ShuffleBytes- 00:06:13.794 [2024-05-15 10:58:10.939897] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000028ff 00:06:13.794 [2024-05-15 10:58:10.940061] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.940221] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002fff 00:06:13.794 [2024-05-15 10:58:10.940388] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.940740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.940768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.940901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.940920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.941046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.941065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.941188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.941206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.794 #40 NEW cov: 12137 ft: 15373 corp: 29/662b lim: 30 exec/s: 40 rss: 73Mb L: 26/29 MS: 1 CrossOver- 00:06:13.794 [2024-05-15 10:58:10.989928] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.990094] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.990253] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.990419] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:10.990757] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5d2883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.990784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.990903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.990919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.991042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.991060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:10.991177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:10.991199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.794 #41 NEW cov: 12137 ft: 15384 corp: 30/686b lim: 30 exec/s: 41 rss: 73Mb L: 24/29 MS: 1 ShuffleBytes- 00:06:13.794 [2024-05-15 10:58:11.040099] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (828416) > buf size (4096) 00:06:13.794 [2024-05-15 10:58:11.040265] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:13.794 [2024-05-15 10:58:11.040612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff833a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:11.040643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.794 [2024-05-15 10:58:11.040763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.794 [2024-05-15 10:58:11.040781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.054 #42 NEW cov: 12137 ft: 15395 corp: 31/700b lim: 30 exec/s: 42 rss: 73Mb L: 14/29 MS: 1 EraseBytes- 00:06:14.054 [2024-05-15 10:58:11.080366] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41984) > buf size (4096) 00:06:14.054 [2024-05-15 10:58:11.080563] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.080724] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff1d 00:06:14.054 [2024-05-15 10:58:11.080880] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.081241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.081270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.054 
[2024-05-15 10:58:11.081387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff8360 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.081404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.081515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.081532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.081649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.081665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.054 #43 NEW cov: 12137 ft: 15463 corp: 32/729b lim: 30 exec/s: 43 rss: 73Mb L: 29/29 MS: 1 ChangeBinInt- 00:06:14.054 [2024-05-15 10:58:11.130244] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1 00:06:14.054 [2024-05-15 10:58:11.130429] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.130748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff003a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.130776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.130901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:96ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.130917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.054 #44 NEW cov: 12137 ft: 15472 corp: 33/746b lim: 30 exec/s: 44 rss: 73Mb L: 17/29 MS: 1 CMP- DE: "\000\000\001\226"- 00:06:14.054 [2024-05-15 10:58:11.170548] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41984) > buf size (4096) 00:06:14.054 [2024-05-15 10:58:11.170715] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.170873] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.171035] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.171386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.171414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.171538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.171555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.171675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.171693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.171816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.171836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.054 #45 NEW cov: 12137 ft: 15477 corp: 34/775b lim: 30 exec/s: 45 rss: 73Mb L: 29/29 MS: 1 CopyPart- 00:06:14.054 [2024-05-15 10:58:11.210686] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.210843] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.210990] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.211131] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.054 [2024-05-15 10:58:11.211499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5d2883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.211528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.211646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:baff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.211665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.211785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.211802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.054 [2024-05-15 10:58:11.211921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.054 [2024-05-15 10:58:11.211940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.054 #46 NEW cov: 12137 ft: 15503 corp: 35/800b lim: 30 exec/s: 46 rss: 73Mb L: 25/29 MS: 1 InsertByte- 00:06:14.055 [2024-05-15 10:58:11.250899] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41984) > buf size (4096) 00:06:14.055 [2024-05-15 10:58:11.251054] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.055 [2024-05-15 10:58:11.251205] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.055 [2024-05-15 10:58:11.251354] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.055 [2024-05-15 10:58:11.251529] 
ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.055 [2024-05-15 10:58:11.251872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.251901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.055 [2024-05-15 10:58:11.252020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.252040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.055 [2024-05-15 10:58:11.252161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.252181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.055 [2024-05-15 10:58:11.252298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.252318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.055 [2024-05-15 10:58:11.252451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.252468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:14.055 #47 NEW cov: 12137 ft: 15541 corp: 36/830b lim: 30 exec/s: 47 rss: 73Mb L: 30/30 MS: 1 CopyPart- 00:06:14.055 [2024-05-15 10:58:11.290961] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.055 [2024-05-15 10:58:11.291122] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.055 [2024-05-15 10:58:11.291265] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:06:14.055 [2024-05-15 10:58:11.291424] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.055 [2024-05-15 10:58:11.291780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.291809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.055 [2024-05-15 10:58:11.291936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.291955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.055 [2024-05-15 10:58:11.292085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.292103] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.055 [2024-05-15 10:58:11.292233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.055 [2024-05-15 10:58:11.292254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.055 #48 NEW cov: 12137 ft: 15624 corp: 37/857b lim: 30 exec/s: 48 rss: 73Mb L: 27/30 MS: 1 InsertRepeatedBytes- 00:06:14.314 [2024-05-15 10:58:11.331029] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41984) > buf size (4096) 00:06:14.314 [2024-05-15 10:58:11.331210] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.314 [2024-05-15 10:58:11.331365] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.314 [2024-05-15 10:58:11.331518] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.314 [2024-05-15 10:58:11.331864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.314 [2024-05-15 10:58:11.331895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.315 [2024-05-15 10:58:11.332011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.315 [2024-05-15 10:58:11.332029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.315 [2024-05-15 10:58:11.332142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.315 [2024-05-15 10:58:11.332162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.315 [2024-05-15 10:58:11.332291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.315 [2024-05-15 10:58:11.332310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.315 #49 NEW cov: 12137 ft: 15630 corp: 38/885b lim: 30 exec/s: 49 rss: 73Mb L: 28/30 MS: 1 EraseBytes- 00:06:14.315 [2024-05-15 10:58:11.381150] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.315 [2024-05-15 10:58:11.381304] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.315 [2024-05-15 10:58:11.381470] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.315 [2024-05-15 10:58:11.381629] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.315 [2024-05-15 10:58:11.381972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5d2883ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.315 [2024-05-15 10:58:11.382001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.315 [2024-05-15 10:58:11.382121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.315 [2024-05-15 10:58:11.382139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.315 [2024-05-15 10:58:11.382262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83af cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.315 [2024-05-15 10:58:11.382282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.315 [2024-05-15 10:58:11.382407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.315 [2024-05-15 10:58:11.382424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.315 #50 NEW cov: 12137 ft: 15634 corp: 39/910b lim: 30 exec/s: 25 rss: 74Mb L: 25/30 MS: 1 InsertByte- 00:06:14.315 #50 DONE cov: 12137 ft: 15634 corp: 39/910b lim: 30 exec/s: 25 rss: 74Mb 00:06:14.315 ###### Recommended dictionary. ###### 00:06:14.315 "\000\000\000\000" # Uses: 1 00:06:14.315 "\000\000\001\226" # Uses: 0 00:06:14.315 ###### End of recommended dictionary. ###### 00:06:14.315 Done 50 runs in 2 second(s) 00:06:14.315 [2024-05-15 10:58:11.410957] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:14.315 10:58:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:14.574 [2024-05-15 10:58:11.580682] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:14.574 [2024-05-15 10:58:11.580756] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400363 ] 00:06:14.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.574 [2024-05-15 10:58:11.834085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.833 [2024-05-15 10:58:11.922788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.833 [2024-05-15 10:58:11.981702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.833 [2024-05-15 10:58:11.997665] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:14.833 [2024-05-15 10:58:11.998089] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:14.833 INFO: Running with entropic power schedule (0xFF, 100). 00:06:14.833 INFO: Seed: 874761420 00:06:14.833 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:14.833 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:14.833 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:14.833 INFO: A corpus is not provided, starting from an empty corpus 00:06:14.833 #2 INITED exec/s: 0 rss: 64Mb 00:06:14.833 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:14.833 This may also happen if the target rejected all inputs we tried so far 00:06:14.833 [2024-05-15 10:58:12.046809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.833 [2024-05-15 10:58:12.046837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.092 NEW_FUNC[1/685]: 0x4850d0 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:15.092 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:15.092 #9 NEW cov: 11807 ft: 11810 corp: 2/11b lim: 35 exec/s: 0 rss: 70Mb L: 10/10 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:15.092 [2024-05-15 10:58:12.358038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.092 [2024-05-15 10:58:12.358072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.092 [2024-05-15 10:58:12.358131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.092 [2024-05-15 10:58:12.358146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.092 [2024-05-15 10:58:12.358201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.092 [2024-05-15 10:58:12.358215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.351 #12 NEW cov: 11939 ft: 12669 corp: 3/32b lim: 35 exec/s: 0 rss: 70Mb L: 21/21 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:06:15.352 [2024-05-15 10:58:12.397663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.397690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.352 #13 NEW cov: 11945 ft: 12911 corp: 4/42b lim: 35 exec/s: 0 rss: 70Mb L: 10/21 MS: 1 ChangeBinInt- 00:06:15.352 [2024-05-15 10:58:12.447805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.447831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.352 #14 NEW cov: 12030 ft: 13094 corp: 5/55b lim: 35 exec/s: 0 rss: 70Mb L: 13/21 MS: 1 EraseBytes- 00:06:15.352 [2024-05-15 10:58:12.497992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.498018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.352 #15 NEW cov: 12030 ft: 13247 corp: 6/65b lim: 35 exec/s: 0 rss: 70Mb 
L: 10/21 MS: 1 ChangeBit- 00:06:15.352 [2024-05-15 10:58:12.538362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.538393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.352 [2024-05-15 10:58:12.538453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.538468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.352 [2024-05-15 10:58:12.538530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:38380038 cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.538544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.352 #16 NEW cov: 12030 ft: 13365 corp: 7/88b lim: 35 exec/s: 0 rss: 71Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:06:15.352 [2024-05-15 10:58:12.588565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.588591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.352 [2024-05-15 10:58:12.588645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.588659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.352 [2024-05-15 10:58:12.588713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.588726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.352 [2024-05-15 10:58:12.588779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:383800ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.352 [2024-05-15 10:58:12.588792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.611 #17 NEW cov: 12030 ft: 13964 corp: 8/119b lim: 35 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 CrossOver- 00:06:15.611 [2024-05-15 10:58:12.638744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.611 [2024-05-15 10:58:12.638769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.611 [2024-05-15 10:58:12.638828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.611 [2024-05-15 10:58:12.638843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:06:15.611 [2024-05-15 10:58:12.638897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.611 [2024-05-15 10:58:12.638910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.611 [2024-05-15 10:58:12.638967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:383800ff cdw11:34003834 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.611 [2024-05-15 10:58:12.638980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.612 #18 NEW cov: 12030 ft: 14030 corp: 9/150b lim: 35 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 ChangeASCIIInt- 00:06:15.612 [2024-05-15 10:58:12.688895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.688920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.688994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:2500ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.689009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.689070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0038 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.689084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.689144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff3800ff cdw11:34003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.689158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.612 #19 NEW cov: 12030 ft: 14084 corp: 10/182b lim: 35 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 InsertByte- 00:06:15.612 [2024-05-15 10:58:12.738758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3bff002e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.738783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.738843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.738857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.612 #24 NEW cov: 12030 ft: 14325 corp: 11/201b lim: 35 exec/s: 0 rss: 71Mb L: 19/32 MS: 5 ChangeByte-InsertByte-CrossOver-InsertByte-InsertRepeatedBytes- 00:06:15.612 [2024-05-15 10:58:12.779024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 
[2024-05-15 10:58:12.779049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.779125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.779140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.779202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.779215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.612 #25 NEW cov: 12030 ft: 14352 corp: 12/224b lim: 35 exec/s: 0 rss: 71Mb L: 23/32 MS: 1 CopyPart- 00:06:15.612 [2024-05-15 10:58:12.818812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.818838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.612 #26 NEW cov: 12030 ft: 14377 corp: 13/234b lim: 35 exec/s: 0 rss: 71Mb L: 10/32 MS: 1 CopyPart- 00:06:15.612 [2024-05-15 10:58:12.859350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff001a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.859376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.859441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.859456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.859513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.859529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.612 [2024-05-15 10:58:12.859591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:383800ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.612 [2024-05-15 10:58:12.859605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.871 #27 NEW cov: 12030 ft: 14384 corp: 14/265b lim: 35 exec/s: 0 rss: 71Mb L: 31/32 MS: 1 ChangeBit- 00:06:15.871 [2024-05-15 10:58:12.899092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff001dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.871 [2024-05-15 10:58:12.899118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.871 #28 NEW cov: 12030 ft: 14391 corp: 15/278b lim: 35 exec/s: 0 rss: 71Mb L: 13/32 MS: 1 ChangeByte- 00:06:15.871 [2024-05-15 
10:58:12.939503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.871 [2024-05-15 10:58:12.939529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.871 [2024-05-15 10:58:12.939591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.871 [2024-05-15 10:58:12.939605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.871 [2024-05-15 10:58:12.939666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.871 [2024-05-15 10:58:12.939680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.871 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:15.871 #29 NEW cov: 12053 ft: 14523 corp: 16/301b lim: 35 exec/s: 0 rss: 71Mb L: 23/32 MS: 1 ChangeByte- 00:06:15.871 [2024-05-15 10:58:12.989467] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:15.871 [2024-05-15 10:58:12.989598] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:15.871 [2024-05-15 10:58:12.989709] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:15.871 [2024-05-15 10:58:12.989934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.871 [2024-05-15 10:58:12.989959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.872 [2024-05-15 10:58:12.990020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff0a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:12.990034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.872 [2024-05-15 10:58:12.990091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:12.990105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.872 [2024-05-15 10:58:12.990162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:12.990179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.872 [2024-05-15 10:58:12.990236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:12.990256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:06:15.872 #30 NEW cov: 12062 ft: 14655 corp: 17/336b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:15.872 [2024-05-15 10:58:13.039948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:13.039972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.872 [2024-05-15 10:58:13.040048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:2500ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:13.040062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.872 [2024-05-15 10:58:13.040120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0038 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:13.040133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.872 [2024-05-15 10:58:13.040190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:3400ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:13.040204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.872 #31 NEW cov: 12062 ft: 14674 corp: 18/368b lim: 35 exec/s: 31 rss: 72Mb L: 32/35 MS: 1 CopyPart- 00:06:15.872 [2024-05-15 10:58:13.089625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00fe cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.872 [2024-05-15 10:58:13.089650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.872 #32 NEW cov: 12062 ft: 14747 corp: 19/378b lim: 35 exec/s: 32 rss: 72Mb L: 10/35 MS: 1 ChangeBit- 00:06:16.199 [2024-05-15 10:58:13.140192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff001a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.140219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.140275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.140290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.140344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff2500ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.140358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.140409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff3800ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.140422] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.199 #33 NEW cov: 12062 ft: 14755 corp: 20/410b lim: 35 exec/s: 33 rss: 72Mb L: 32/35 MS: 1 InsertByte- 00:06:16.199 [2024-05-15 10:58:13.190172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.190197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.190276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff007f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.190290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.190349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.190363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.199 #34 NEW cov: 12062 ft: 14765 corp: 21/433b lim: 35 exec/s: 34 rss: 72Mb L: 23/35 MS: 1 ChangeBit- 00:06:16.199 [2024-05-15 10:58:13.240428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.240454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.240516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.240530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.240588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff380038 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.240602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.240660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:3400ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.240674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.199 #35 NEW cov: 12062 ft: 14788 corp: 22/465b lim: 35 exec/s: 35 rss: 72Mb L: 32/35 MS: 1 ShuffleBytes- 00:06:16.199 [2024-05-15 10:58:13.290562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff001a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.290587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.290645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff 
cdw11:2500ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.290659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.290716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:38ff0038 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.290729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.290784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff3800ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.290798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.199 #36 NEW cov: 12062 ft: 14832 corp: 23/497b lim: 35 exec/s: 36 rss: 72Mb L: 32/35 MS: 1 ShuffleBytes- 00:06:16.199 [2024-05-15 10:58:13.340462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.340487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.199 [2024-05-15 10:58:13.340548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.340565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.199 #37 NEW cov: 12062 ft: 14837 corp: 24/515b lim: 35 exec/s: 37 rss: 72Mb L: 18/35 MS: 1 CrossOver- 00:06:16.199 [2024-05-15 10:58:13.380410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.380435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.199 #38 NEW cov: 12062 ft: 14847 corp: 25/526b lim: 35 exec/s: 38 rss: 72Mb L: 11/35 MS: 1 InsertByte- 00:06:16.199 [2024-05-15 10:58:13.430571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.199 [2024-05-15 10:58:13.430595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.199 #39 NEW cov: 12062 ft: 14888 corp: 26/537b lim: 35 exec/s: 39 rss: 72Mb L: 11/35 MS: 1 EraseBytes- 00:06:16.480 [2024-05-15 10:58:13.470980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.471006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.471068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.471082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.471141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.471155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.480 #40 NEW cov: 12062 ft: 14926 corp: 27/562b lim: 35 exec/s: 40 rss: 72Mb L: 25/35 MS: 1 CopyPart- 00:06:16.480 [2024-05-15 10:58:13.511048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.511074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.511135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:38380038 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.511149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.511209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.511223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.480 #43 NEW cov: 12062 ft: 14970 corp: 28/587b lim: 35 exec/s: 43 rss: 72Mb L: 25/35 MS: 3 InsertByte-ChangeBit-CrossOver- 00:06:16.480 [2024-05-15 10:58:13.551329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.551355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.551414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.551431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.551485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.551499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.551553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:383800ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.551567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.480 #44 NEW cov: 12062 ft: 14995 corp: 29/615b lim: 35 exec/s: 44 rss: 72Mb L: 28/35 MS: 1 EraseBytes- 00:06:16.480 [2024-05-15 10:58:13.591395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:16.480 [2024-05-15 10:58:13.591421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.591496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.591509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.591565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.591578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.591637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:383400ff cdw11:33003330 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.591651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.480 #45 NEW cov: 12062 ft: 15011 corp: 30/643b lim: 35 exec/s: 45 rss: 72Mb L: 28/35 MS: 1 ChangeASCIIInt- 00:06:16.480 [2024-05-15 10:58:13.641179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:3400ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.641205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.480 #46 NEW cov: 12062 ft: 15024 corp: 31/654b lim: 35 exec/s: 46 rss: 72Mb L: 11/35 MS: 1 CrossOver- 00:06:16.480 [2024-05-15 10:58:13.691574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.691600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.691664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.691678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.691735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.691748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.480 #47 NEW cov: 12062 ft: 15036 corp: 32/675b lim: 35 exec/s: 47 rss: 72Mb L: 21/35 MS: 1 CrossOver- 00:06:16.480 [2024-05-15 10:58:13.731729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.480 [2024-05-15 10:58:13.731758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.480 [2024-05-15 10:58:13.731814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:5 nsid:0 cdw10:38380038 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.481 [2024-05-15 10:58:13.731828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.481 [2024-05-15 10:58:13.731883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.481 [2024-05-15 10:58:13.731896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.740 #48 NEW cov: 12062 ft: 15041 corp: 33/700b lim: 35 exec/s: 48 rss: 73Mb L: 25/35 MS: 1 CrossOver- 00:06:16.740 [2024-05-15 10:58:13.781845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.781872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.781932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.781947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.782004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.782018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.740 #49 NEW cov: 12062 ft: 15109 corp: 34/725b lim: 35 exec/s: 49 rss: 73Mb L: 25/35 MS: 1 ShuffleBytes- 00:06:16.740 [2024-05-15 10:58:13.832119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff001a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.832145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.832203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3900ff30 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.832218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.832275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.832288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.832346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:383800ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.832360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.740 #50 NEW cov: 12062 ft: 15120 corp: 35/756b lim: 35 exec/s: 50 rss: 73Mb L: 31/35 MS: 1 ChangeASCIIInt- 00:06:16.740 [2024-05-15 
10:58:13.872222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:78ff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.872247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.872329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.872347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.872420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:38ff0038 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.872434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.872488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.872502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.740 #51 NEW cov: 12062 ft: 15132 corp: 36/789b lim: 35 exec/s: 51 rss: 73Mb L: 33/35 MS: 1 InsertByte- 00:06:16.740 [2024-05-15 10:58:13.912292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.912319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.912399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:47470047 cdw11:47004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.912413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.912468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:47470047 cdw11:ff004747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.912482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.912537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.912551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.740 #52 NEW cov: 12062 ft: 15146 corp: 37/818b lim: 35 exec/s: 52 rss: 73Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:06:16.740 [2024-05-15 10:58:13.952418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.952444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.740 [2024-05-15 10:58:13.952501] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.740 [2024-05-15 10:58:13.952515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.741 [2024-05-15 10:58:13.952575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:3800ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.741 [2024-05-15 10:58:13.952589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.741 [2024-05-15 10:58:13.952648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:38380038 cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.741 [2024-05-15 10:58:13.952662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.741 #53 NEW cov: 12062 ft: 15189 corp: 38/849b lim: 35 exec/s: 53 rss: 73Mb L: 31/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:16.741 [2024-05-15 10:58:13.992527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff001a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.741 [2024-05-15 10:58:13.992556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.741 [2024-05-15 10:58:13.992613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:3800ff38 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.741 [2024-05-15 10:58:13.992627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.741 [2024-05-15 10:58:13.992683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.741 [2024-05-15 10:58:13.992697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.741 [2024-05-15 10:58:13.992757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:383800ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.741 [2024-05-15 10:58:13.992770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.000 #54 NEW cov: 12062 ft: 15200 corp: 39/880b lim: 35 exec/s: 54 rss: 73Mb L: 31/35 MS: 1 CopyPart- 00:06:17.000 [2024-05-15 10:58:14.032651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff001a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.000 [2024-05-15 10:58:14.032676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.000 [2024-05-15 10:58:14.032736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.001 [2024-05-15 10:58:14.032751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.001 [2024-05-15 
10:58:14.032805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:38380038 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.001 [2024-05-15 10:58:14.032819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.001 [2024-05-15 10:58:14.032875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff3800ff cdw11:38003838 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.001 [2024-05-15 10:58:14.032889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.001 #55 NEW cov: 12062 ft: 15209 corp: 40/912b lim: 35 exec/s: 27 rss: 73Mb L: 32/35 MS: 1 CrossOver- 00:06:17.001 #55 DONE cov: 12062 ft: 15209 corp: 40/912b lim: 35 exec/s: 27 rss: 73Mb 00:06:17.001 ###### Recommended dictionary. ###### 00:06:17.001 "\377\377\377\377\377\377\377\377" # Uses: 0 00:06:17.001 ###### End of recommended dictionary. ###### 00:06:17.001 Done 55 runs in 2 second(s) 00:06:17.001 [2024-05-15 10:58:14.062975] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:17.001 10:58:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:17.001 [2024-05-15 10:58:14.232717] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:17.001 [2024-05-15 10:58:14.232804] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400900 ] 00:06:17.001 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.260 [2024-05-15 10:58:14.486150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.520 [2024-05-15 10:58:14.579168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.520 [2024-05-15 10:58:14.638318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.520 [2024-05-15 10:58:14.654267] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:17.520 [2024-05-15 10:58:14.654708] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:17.520 INFO: Running with entropic power schedule (0xFF, 100). 00:06:17.520 INFO: Seed: 3531745605 00:06:17.520 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:17.520 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:17.520 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:17.520 INFO: A corpus is not provided, starting from an empty corpus 00:06:17.520 #2 INITED exec/s: 0 rss: 63Mb 00:06:17.520 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:17.520 This may also happen if the target rejected all inputs we tried so far 00:06:17.778 NEW_FUNC[1/674]: 0x486da0 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:17.778 NEW_FUNC[2/674]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:17.779 #5 NEW cov: 11723 ft: 11724 corp: 2/13b lim: 20 exec/s: 0 rss: 70Mb L: 12/12 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:06:18.038 #6 NEW cov: 11853 ft: 12323 corp: 3/26b lim: 20 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 InsertByte- 00:06:18.038 #7 NEW cov: 11859 ft: 12493 corp: 4/39b lim: 20 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CopyPart- 00:06:18.038 #8 NEW cov: 11944 ft: 12788 corp: 5/51b lim: 20 exec/s: 0 rss: 70Mb L: 12/13 MS: 1 ShuffleBytes- 00:06:18.038 #9 NEW cov: 11944 ft: 12951 corp: 6/64b lim: 20 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeByte- 00:06:18.038 #10 NEW cov: 11960 ft: 13217 corp: 7/84b lim: 20 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CrossOver- 00:06:18.038 #11 NEW cov: 11960 ft: 13270 corp: 8/104b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 ChangeByte- 00:06:18.296 #17 NEW cov: 11960 ft: 13314 corp: 9/124b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 CopyPart- 00:06:18.297 #18 NEW cov: 11960 ft: 13345 corp: 10/136b lim: 20 exec/s: 0 rss: 71Mb L: 12/20 MS: 1 ChangeBit- 00:06:18.297 #19 NEW cov: 11960 ft: 13748 corp: 11/143b lim: 20 exec/s: 0 rss: 71Mb L: 7/20 MS: 1 EraseBytes- 00:06:18.297 #20 NEW cov: 11960 ft: 13756 corp: 12/156b lim: 20 exec/s: 0 rss: 71Mb L: 13/20 MS: 1 ChangeBinInt- 00:06:18.297 #21 NEW cov: 11960 ft: 13768 corp: 13/176b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:18.297 #22 NEW cov: 11961 ft: 13805 corp: 14/195b lim: 20 exec/s: 0 rss: 71Mb L: 19/20 MS: 1 CrossOver- 00:06:18.555 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:18.555 #23 NEW cov: 11984 ft: 13871 corp: 15/208b lim: 20 exec/s: 0 rss: 71Mb L: 13/20 MS: 1 ShuffleBytes- 00:06:18.555 #24 NEW cov: 11984 ft: 13882 corp: 16/227b lim: 20 exec/s: 0 rss: 71Mb L: 19/20 MS: 1 ChangeBit- 00:06:18.555 #25 NEW cov: 11984 ft: 13914 corp: 17/239b lim: 20 exec/s: 25 rss: 71Mb L: 12/20 MS: 1 ChangeByte- 00:06:18.556 #26 NEW cov: 11984 ft: 14005 corp: 18/253b lim: 20 exec/s: 26 rss: 72Mb L: 14/20 MS: 1 InsertByte- 00:06:18.556 #27 NEW cov: 11984 ft: 14022 corp: 19/273b lim: 20 exec/s: 27 rss: 72Mb L: 20/20 MS: 1 CMP- DE: "\373\012\000\2173\373\205\000"- 00:06:18.815 #28 NEW cov: 11984 ft: 14042 corp: 20/293b lim: 20 exec/s: 28 rss: 72Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:18.815 #29 NEW cov: 11984 ft: 14087 corp: 21/300b lim: 20 exec/s: 29 rss: 72Mb L: 7/20 MS: 1 CrossOver- 00:06:18.815 #30 NEW cov: 11984 ft: 14099 corp: 22/320b lim: 20 exec/s: 30 rss: 72Mb L: 20/20 MS: 1 ChangeASCIIInt- 00:06:18.815 #31 NEW cov: 11984 ft: 14164 corp: 23/333b lim: 20 exec/s: 31 rss: 72Mb L: 13/20 MS: 1 ShuffleBytes- 00:06:18.815 #32 NEW cov: 11984 ft: 14167 corp: 24/352b lim: 20 exec/s: 32 rss: 72Mb L: 19/20 MS: 1 ChangeBinInt- 00:06:18.815 #33 NEW cov: 11984 ft: 14189 corp: 25/364b lim: 20 exec/s: 33 rss: 72Mb L: 12/20 MS: 1 PersAutoDict- DE: "\373\012\000\2173\373\205\000"- 00:06:19.073 #36 NEW cov: 11985 ft: 14460 corp: 26/375b lim: 20 exec/s: 36 rss: 72Mb L: 11/20 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes- 00:06:19.073 #37 NEW cov: 11985 ft: 14476 corp: 27/394b lim: 20 exec/s: 37 rss: 72Mb L: 19/20 MS: 1 EraseBytes- 
00:06:19.073 #38 NEW cov: 11985 ft: 14527 corp: 28/406b lim: 20 exec/s: 38 rss: 72Mb L: 12/20 MS: 1 ChangeBinInt- 00:06:19.073 #39 NEW cov: 11985 ft: 14610 corp: 29/426b lim: 20 exec/s: 39 rss: 72Mb L: 20/20 MS: 1 CopyPart- 00:06:19.073 #40 NEW cov: 11985 ft: 14612 corp: 30/441b lim: 20 exec/s: 40 rss: 72Mb L: 15/20 MS: 1 CrossOver- 00:06:19.073 #41 NEW cov: 11985 ft: 14614 corp: 31/461b lim: 20 exec/s: 41 rss: 72Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:19.332 #42 NEW cov: 11985 ft: 14623 corp: 32/475b lim: 20 exec/s: 42 rss: 72Mb L: 14/20 MS: 1 ChangeByte- 00:06:19.332 #43 NEW cov: 11985 ft: 14625 corp: 33/494b lim: 20 exec/s: 43 rss: 72Mb L: 19/20 MS: 1 ChangeByte- 00:06:19.332 #44 NEW cov: 11985 ft: 14629 corp: 34/506b lim: 20 exec/s: 44 rss: 72Mb L: 12/20 MS: 1 ChangeByte- 00:06:19.332 #45 NEW cov: 11985 ft: 14667 corp: 35/519b lim: 20 exec/s: 45 rss: 73Mb L: 13/20 MS: 1 ChangeByte- 00:06:19.332 #46 NEW cov: 11985 ft: 14682 corp: 36/539b lim: 20 exec/s: 46 rss: 73Mb L: 20/20 MS: 1 PersAutoDict- DE: "\373\012\000\2173\373\205\000"- 00:06:19.332 #47 NEW cov: 11985 ft: 14690 corp: 37/553b lim: 20 exec/s: 47 rss: 73Mb L: 14/20 MS: 1 InsertByte- 00:06:19.592 #48 NEW cov: 11985 ft: 14699 corp: 38/566b lim: 20 exec/s: 48 rss: 73Mb L: 13/20 MS: 1 ChangeBinInt- 00:06:19.592 #49 NEW cov: 11985 ft: 14711 corp: 39/586b lim: 20 exec/s: 49 rss: 73Mb L: 20/20 MS: 1 CopyPart- 00:06:19.592 #50 NEW cov: 11985 ft: 14726 corp: 40/606b lim: 20 exec/s: 25 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:06:19.592 #50 DONE cov: 11985 ft: 14726 corp: 40/606b lim: 20 exec/s: 25 rss: 73Mb 00:06:19.592 ###### Recommended dictionary. ###### 00:06:19.592 "\373\012\000\2173\373\205\000" # Uses: 2 00:06:19.592 ###### End of recommended dictionary. ###### 00:06:19.592 Done 50 runs in 2 second(s) 00:06:19.592 [2024-05-15 10:58:16.735327] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:19.592 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:19.851 10:58:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:19.851 [2024-05-15 10:58:16.905014] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:19.851 [2024-05-15 10:58:16.905111] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401244 ] 00:06:19.851 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.111 [2024-05-15 10:58:17.165900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.111 [2024-05-15 10:58:17.255368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.111 [2024-05-15 10:58:17.314466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.111 [2024-05-15 10:58:17.330410] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:20.111 [2024-05-15 10:58:17.330825] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:20.111 INFO: Running with entropic power schedule (0xFF, 100). 00:06:20.111 INFO: Seed: 1912777870 00:06:20.111 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:20.111 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:20.111 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:20.111 INFO: A corpus is not provided, starting from an empty corpus 00:06:20.111 #2 INITED exec/s: 0 rss: 63Mb 00:06:20.111 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:20.111 This may also happen if the target rejected all inputs we tried so far 00:06:20.370 [2024-05-15 10:58:17.386627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.370 [2024-05-15 10:58:17.386655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.370 [2024-05-15 10:58:17.386708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.370 [2024-05-15 10:58:17.386725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.370 [2024-05-15 10:58:17.386777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.370 [2024-05-15 10:58:17.386790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.370 [2024-05-15 10:58:17.386842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.370 [2024-05-15 10:58:17.386855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.629 NEW_FUNC[1/686]: 0x487e90 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:20.629 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:20.629 #19 NEW cov: 11830 ft: 11824 corp: 2/35b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:20.629 [2024-05-15 10:58:17.697629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.629 [2024-05-15 10:58:17.697669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.629 [2024-05-15 10:58:17.697734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e232e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.629 [2024-05-15 10:58:17.697752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.629 [2024-05-15 10:58:17.697815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.629 [2024-05-15 10:58:17.697833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.629 [2024-05-15 10:58:17.697894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.629 [2024-05-15 10:58:17.697911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.629 [2024-05-15 
10:58:17.697972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.629 [2024-05-15 10:58:17.697989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:20.629 #20 NEW cov: 11960 ft: 12490 corp: 3/70b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertByte- 00:06:20.629 [2024-05-15 10:58:17.747439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.629 [2024-05-15 10:58:17.747465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.629 [2024-05-15 10:58:17.747521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.747534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.747590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:d1d12edb cdw11:d12e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.747602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.747661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.747674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.630 #21 NEW cov: 11966 ft: 12840 corp: 4/104b lim: 35 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:20.630 [2024-05-15 10:58:17.787543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.787568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.787641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.787655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.787712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.787725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.787780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.787794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.630 #22 NEW cov: 
12051 ft: 13058 corp: 5/138b lim: 35 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 CrossOver- 00:06:20.630 [2024-05-15 10:58:17.837542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.837568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.837627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.837641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.837695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.837708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.630 #23 NEW cov: 12051 ft: 13451 corp: 6/160b lim: 35 exec/s: 0 rss: 70Mb L: 22/35 MS: 1 EraseBytes- 00:06:20.630 [2024-05-15 10:58:17.887657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.887682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.887741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.887755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.630 [2024-05-15 10:58:17.887811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2edc cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.630 [2024-05-15 10:58:17.887825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.889 #24 NEW cov: 12051 ft: 13578 corp: 7/182b lim: 35 exec/s: 0 rss: 71Mb L: 22/35 MS: 1 ChangeByte- 00:06:20.889 [2024-05-15 10:58:17.937793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:17.937818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.889 [2024-05-15 10:58:17.937892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e232e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:17.937906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.889 [2024-05-15 10:58:17.937963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:17.937976] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.889 #25 NEW cov: 12051 ft: 13644 corp: 8/209b lim: 35 exec/s: 0 rss: 71Mb L: 27/35 MS: 1 EraseBytes- 00:06:20.889 [2024-05-15 10:58:17.978187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:17.978213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.889 [2024-05-15 10:58:17.978284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:17.978298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.889 [2024-05-15 10:58:17.978353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:17.978366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.889 [2024-05-15 10:58:17.978424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:17.978438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.889 [2024-05-15 10:58:17.978501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e302e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:17.978514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:20.889 #26 NEW cov: 12051 ft: 13671 corp: 9/244b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 InsertByte- 00:06:20.889 [2024-05-15 10:58:18.018156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.889 [2024-05-15 10:58:18.018182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.018239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.018252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.018308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.018322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.018383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.018396] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.890 #27 NEW cov: 12051 ft: 13703 corp: 10/278b lim: 35 exec/s: 0 rss: 71Mb L: 34/35 MS: 1 ChangeByte- 00:06:20.890 [2024-05-15 10:58:18.068248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.068273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.068346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.068361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.068420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.068434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.068499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.068513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.890 #28 NEW cov: 12051 ft: 13775 corp: 11/312b lim: 35 exec/s: 0 rss: 71Mb L: 34/35 MS: 1 CrossOver- 00:06:20.890 [2024-05-15 10:58:18.108393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.108418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.108472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.108485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.108540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.108553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.890 [2024-05-15 10:58:18.108606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:d1d12ed8 cdw11:d12e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:20.890 [2024-05-15 10:58:18.108619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.890 #29 NEW cov: 12051 ft: 13819 corp: 12/346b lim: 35 exec/s: 0 rss: 71Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:21.149 [2024-05-15 10:58:18.158398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 
cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.158423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.158483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.158496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.158553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2edc cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.158567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.149 #30 NEW cov: 12051 ft: 13831 corp: 13/368b lim: 35 exec/s: 0 rss: 71Mb L: 22/35 MS: 1 ChangeBit- 00:06:21.149 [2024-05-15 10:58:18.208666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.208692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.208749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.208763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.208818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:d1d12edb cdw11:d12e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.208831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.208888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.208902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.149 #31 NEW cov: 12051 ft: 13877 corp: 14/402b lim: 35 exec/s: 0 rss: 71Mb L: 34/35 MS: 1 CopyPart- 00:06:21.149 [2024-05-15 10:58:18.248688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.248713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.248771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.248784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.248843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 
cdw10:2c2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.248856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.149 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:21.149 #32 NEW cov: 12074 ft: 13951 corp: 15/424b lim: 35 exec/s: 0 rss: 71Mb L: 22/35 MS: 1 ChangeBit- 00:06:21.149 [2024-05-15 10:58:18.289086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.289112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.289185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2c2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.289199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.289256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.289272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.289328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.289341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.289402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e302e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.289416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.149 #33 NEW cov: 12074 ft: 14013 corp: 16/459b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBit- 00:06:21.149 [2024-05-15 10:58:18.338910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.338936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.339011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2ecf cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.339026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.339088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2c2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.339102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.149 
#39 NEW cov: 12074 ft: 14034 corp: 17/481b lim: 35 exec/s: 39 rss: 71Mb L: 22/35 MS: 1 ChangeByte- 00:06:21.149 [2024-05-15 10:58:18.389231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6000e6e6 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.389257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.389315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.389328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.389389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.389403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.149 [2024-05-15 10:58:18.389458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.149 [2024-05-15 10:58:18.389471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.149 #43 NEW cov: 12074 ft: 14060 corp: 18/511b lim: 35 exec/s: 43 rss: 71Mb L: 30/35 MS: 4 InsertRepeatedBytes-CrossOver-ChangeByte-InsertRepeatedBytes- 00:06:21.409 [2024-05-15 10:58:18.429340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00220000 cdw11:2e2e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.429366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.429442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.429460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.429515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.429528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.429583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.429596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.409 #44 NEW cov: 12074 ft: 14098 corp: 19/545b lim: 35 exec/s: 44 rss: 72Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:21.409 [2024-05-15 10:58:18.479484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00220000 cdw11:2e2e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 
[2024-05-15 10:58:18.479509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.479567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.479580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.479635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.479648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.479703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.479717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.409 #45 NEW cov: 12074 ft: 14107 corp: 20/579b lim: 35 exec/s: 45 rss: 72Mb L: 34/35 MS: 1 ChangeByte- 00:06:21.409 [2024-05-15 10:58:18.529805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.529831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.529904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.529918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.529975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:d1d12edb cdw11:d12e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.529989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.530046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.530060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.530118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e302e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.530132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.409 #46 NEW cov: 12074 ft: 14134 corp: 21/614b lim: 35 exec/s: 46 rss: 72Mb L: 35/35 MS: 1 InsertByte- 00:06:21.409 [2024-05-15 10:58:18.579433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 
[2024-05-15 10:58:18.579459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.579518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.579532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.409 #47 NEW cov: 12074 ft: 14374 corp: 22/633b lim: 35 exec/s: 47 rss: 72Mb L: 19/35 MS: 1 EraseBytes- 00:06:21.409 [2024-05-15 10:58:18.619921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.619948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.620007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.620021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.620076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.620090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.620143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a12e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.620156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.620214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e302e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.620226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.409 #48 NEW cov: 12074 ft: 14398 corp: 23/668b lim: 35 exec/s: 48 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:06:21.409 [2024-05-15 10:58:18.659855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.659881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.659937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2ecf cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 [2024-05-15 10:58:18.659950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.409 [2024-05-15 10:58:18.660008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2c2e2e2e cdw11:2e480000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.409 
[2024-05-15 10:58:18.660021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.668 #49 NEW cov: 12074 ft: 14419 corp: 24/690b lim: 35 exec/s: 49 rss: 72Mb L: 22/35 MS: 1 ChangeByte- 00:06:21.669 [2024-05-15 10:58:18.710206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.710235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.710308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.710322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.710383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.710397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.710453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.710466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.669 #50 NEW cov: 12074 ft: 14446 corp: 25/724b lim: 35 exec/s: 50 rss: 72Mb L: 34/35 MS: 1 ChangeByte- 00:06:21.669 [2024-05-15 10:58:18.749914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.749940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.750012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.750027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.669 #51 NEW cov: 12074 ft: 14478 corp: 26/738b lim: 35 exec/s: 51 rss: 72Mb L: 14/35 MS: 1 CrossOver- 00:06:21.669 [2024-05-15 10:58:18.790371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c2c2cdc2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.790402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.790458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.790472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.790525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.790538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.790593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.790606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.669 #55 NEW cov: 12074 ft: 14489 corp: 27/767b lim: 35 exec/s: 55 rss: 72Mb L: 29/35 MS: 4 ChangeByte-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:06:21.669 [2024-05-15 10:58:18.830628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.830654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.830712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.830729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.830784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.830797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.830851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a12e2e2e cdw11:2ed20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.830864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.830921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e302e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.830934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.880750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.880774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.880846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.880860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.880917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2ed2a12e cdw11:d5300000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.880930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.880986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a12e2e2e cdw11:2ed20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.881000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.881054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e302e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.881066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.669 #57 NEW cov: 12074 ft: 14542 corp: 28/802b lim: 35 exec/s: 57 rss: 72Mb L: 35/35 MS: 2 ChangeBinInt-CopyPart- 00:06:21.669 [2024-05-15 10:58:18.920572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2ee6e6 cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.920597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.920671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60002e2e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.920685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.669 [2024-05-15 10:58:18.920742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.669 [2024-05-15 10:58:18.920755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.929 #58 NEW cov: 12074 ft: 14559 corp: 29/828b lim: 35 exec/s: 58 rss: 72Mb L: 26/35 MS: 1 CrossOver- 00:06:21.929 [2024-05-15 10:58:18.970738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:18.970764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:18.970823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:18.970837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:18.970892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2edc cdw11:802e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:18.970906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.929 #59 NEW cov: 12074 ft: 14645 corp: 30/850b lim: 35 exec/s: 59 rss: 72Mb L: 22/35 MS: 1 ChangeByte- 00:06:21.929 [2024-05-15 10:58:19.010967] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.010992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.011062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.011076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.011133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:222e2e00 cdw11:2e990000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.011146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.011202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.011216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.929 #60 NEW cov: 12074 ft: 14651 corp: 31/884b lim: 35 exec/s: 60 rss: 72Mb L: 34/35 MS: 1 CrossOver- 00:06:21.929 [2024-05-15 10:58:19.051083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c2c2cdc2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.051107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.051166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.051180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.051237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c2c2c2c2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.051250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.051306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c2c2c2c2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.051320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.929 #61 NEW cov: 12074 ft: 14660 corp: 32/918b lim: 35 exec/s: 61 rss: 72Mb L: 34/35 MS: 1 CopyPart- 00:06:21.929 [2024-05-15 10:58:19.101394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.101418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.101502] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.101516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.101572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2ed2a12e cdw11:d5300000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.101584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.101640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a12e2e2e cdw11:2ed20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.101653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.101707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e302e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.101720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.929 #62 NEW cov: 12074 ft: 14676 corp: 33/953b lim: 35 exec/s: 62 rss: 72Mb L: 35/35 MS: 1 ChangeASCIIInt- 00:06:21.929 [2024-05-15 10:58:19.151215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2ee6e6 cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.151241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.151314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60002e2e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.151328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.929 [2024-05-15 10:58:19.151391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.929 [2024-05-15 10:58:19.151405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.929 #63 NEW cov: 12074 ft: 14698 corp: 34/976b lim: 35 exec/s: 63 rss: 73Mb L: 23/35 MS: 1 EraseBytes- 00:06:22.189 [2024-05-15 10:58:19.201569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.201594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.201652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.201666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.201721] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2edc2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.201734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.201794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2edc2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.201807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.189 #64 NEW cov: 12074 ft: 14706 corp: 35/1007b lim: 35 exec/s: 64 rss: 73Mb L: 31/35 MS: 1 CopyPart- 00:06:22.189 [2024-05-15 10:58:19.241481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2c cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.241505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.241579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.241593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.241648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.241662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.189 #65 NEW cov: 12074 ft: 14718 corp: 36/1030b lim: 35 exec/s: 65 rss: 73Mb L: 23/35 MS: 1 InsertByte- 00:06:22.189 [2024-05-15 10:58:19.281882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.281907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.281979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2c2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.281993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.282047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e332e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.282060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.282115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.282128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.282179] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e302e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.282192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:22.189 #66 NEW cov: 12074 ft: 14732 corp: 37/1065b lim: 35 exec/s: 66 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:22.189 [2024-05-15 10:58:19.331789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2ee6e6 cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.331815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.331890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:60002e2e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.331904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.331963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.331977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.189 #67 NEW cov: 12074 ft: 14745 corp: 38/1091b lim: 35 exec/s: 67 rss: 73Mb L: 26/35 MS: 1 CopyPart- 00:06:22.189 [2024-05-15 10:58:19.372010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.372036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.372093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.372106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.372163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2edc2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.372177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.189 [2024-05-15 10:58:19.372235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2edc2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.189 [2024-05-15 10:58:19.372249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.189 #68 NEW cov: 12074 ft: 14760 corp: 39/1122b lim: 35 exec/s: 34 rss: 73Mb L: 31/35 MS: 1 ChangeByte- 00:06:22.189 #68 DONE cov: 12074 ft: 14760 corp: 39/1122b lim: 35 exec/s: 34 rss: 73Mb 00:06:22.189 Done 68 runs in 2 second(s) 00:06:22.189 [2024-05-15 10:58:19.401489] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in 
favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:22.449 10:58:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:22.449 [2024-05-15 10:58:19.568449] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:06:22.449 [2024-05-15 10:58:19.568520] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401719 ] 00:06:22.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.708 [2024-05-15 10:58:19.819688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.708 [2024-05-15 10:58:19.907268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.708 [2024-05-15 10:58:19.966187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.966 [2024-05-15 10:58:19.982150] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:22.966 [2024-05-15 10:58:19.982595] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:22.966 INFO: Running with entropic power schedule (0xFF, 100). 00:06:22.966 INFO: Seed: 268806985 00:06:22.966 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:22.966 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:22.966 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:22.966 INFO: A corpus is not provided, starting from an empty corpus 00:06:22.966 #2 INITED exec/s: 0 rss: 64Mb 00:06:22.966 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:22.966 This may also happen if the target rejected all inputs we tried so far 00:06:22.966 [2024-05-15 10:58:20.027515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.966 [2024-05-15 10:58:20.027553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.966 [2024-05-15 10:58:20.027590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.967 [2024-05-15 10:58:20.027608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.967 [2024-05-15 10:58:20.027640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.967 [2024-05-15 10:58:20.027657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.967 [2024-05-15 10:58:20.027688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.967 [2024-05-15 10:58:20.027705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.226 NEW_FUNC[1/686]: 0x48a020 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:23.226 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 
00:06:23.226 #6 NEW cov: 11841 ft: 11833 corp: 2/40b lim: 45 exec/s: 0 rss: 70Mb L: 39/39 MS: 4 CopyPart-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:06:23.226 [2024-05-15 10:58:20.368203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.226 [2024-05-15 10:58:20.368253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.226 [2024-05-15 10:58:20.368301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5ac75a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.226 [2024-05-15 10:58:20.368317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.226 [2024-05-15 10:58:20.368345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.226 [2024-05-15 10:58:20.368360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.226 [2024-05-15 10:58:20.368395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.226 [2024-05-15 10:58:20.368411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.226 #7 NEW cov: 11971 ft: 12330 corp: 3/79b lim: 45 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 ChangeByte- 00:06:23.226 [2024-05-15 10:58:20.438244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.226 [2024-05-15 10:58:20.438279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.226 [2024-05-15 10:58:20.438326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.226 [2024-05-15 10:58:20.438342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.226 [2024-05-15 10:58:20.438370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.226 [2024-05-15 10:58:20.438391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.226 [2024-05-15 10:58:20.438419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.226 [2024-05-15 10:58:20.438434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.226 #8 NEW cov: 11977 ft: 12557 corp: 4/122b lim: 45 exec/s: 0 rss: 70Mb L: 43/43 MS: 1 CopyPart- 00:06:23.486 [2024-05-15 10:58:20.508390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 
10:58:20.508418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.508465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.508481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.508508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.508524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.508551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.508566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.486 #9 NEW cov: 12062 ft: 12836 corp: 5/165b lim: 45 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 ShuffleBytes- 00:06:23.486 [2024-05-15 10:58:20.578635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.578664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.578711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.578726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.578754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.578769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.578796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.578811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.486 #10 NEW cov: 12062 ft: 12937 corp: 6/208b lim: 45 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 ChangeByte- 00:06:23.486 [2024-05-15 10:58:20.628739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.628769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.628815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 
10:58:20.628831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.628858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.628874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.628901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.628916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.486 #11 NEW cov: 12062 ft: 13004 corp: 7/251b lim: 45 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 ShuffleBytes- 00:06:23.486 [2024-05-15 10:58:20.698962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.698993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.699026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5ac75a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.699042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.699071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.699086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.699119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.699134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.486 #12 NEW cov: 12062 ft: 13136 corp: 8/295b lim: 45 exec/s: 0 rss: 71Mb L: 44/44 MS: 1 CrossOver- 00:06:23.486 [2024-05-15 10:58:20.749066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.749097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.749130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 10:58:20.749146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.486 [2024-05-15 10:58:20.749175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.486 [2024-05-15 
10:58:20.749190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.745 #13 NEW cov: 12062 ft: 13592 corp: 9/324b lim: 45 exec/s: 0 rss: 71Mb L: 29/44 MS: 1 InsertRepeatedBytes- 00:06:23.745 [2024-05-15 10:58:20.809168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.809197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.809243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.809259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.809287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.809302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.809330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a53 cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.809345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.745 #14 NEW cov: 12062 ft: 13686 corp: 10/363b lim: 45 exec/s: 0 rss: 71Mb L: 39/44 MS: 1 ChangeBinInt- 00:06:23.745 [2024-05-15 10:58:20.859339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.859368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.859407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5ac75a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.859422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.859450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.859466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.859497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a9f cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.859511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.745 #15 NEW cov: 12062 ft: 13787 corp: 11/402b lim: 45 exec/s: 0 rss: 71Mb L: 39/44 MS: 1 ChangeBinInt- 00:06:23.745 [2024-05-15 10:58:20.909475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.909504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.909550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.909565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.909593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.909608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.909635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.909649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.745 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:23.745 #16 NEW cov: 12085 ft: 13842 corp: 12/445b lim: 45 exec/s: 0 rss: 71Mb L: 43/44 MS: 1 ShuffleBytes- 00:06:23.745 [2024-05-15 10:58:20.959544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.959573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.959619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5ac75a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.959634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:20.959663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:20.959678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.745 #17 NEW cov: 12085 ft: 13868 corp: 13/478b lim: 45 exec/s: 0 rss: 71Mb L: 33/44 MS: 1 EraseBytes- 00:06:23.745 [2024-05-15 10:58:21.009783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:21.009813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:21.009845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:21.009861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:21.009890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:21.009905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.745 [2024-05-15 10:58:21.009937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5ada0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.745 [2024-05-15 10:58:21.009953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.004 #18 NEW cov: 12085 ft: 13888 corp: 14/521b lim: 45 exec/s: 18 rss: 71Mb L: 43/44 MS: 1 ChangeBit- 00:06:24.004 [2024-05-15 10:58:21.079943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.079972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.004 [2024-05-15 10:58:21.080019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5ac75a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.080034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.004 [2024-05-15 10:58:21.080062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.080077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.004 [2024-05-15 10:58:21.080104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.080119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.004 [2024-05-15 10:58:21.080146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.080160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.004 #19 NEW cov: 12085 ft: 13973 corp: 15/566b lim: 45 exec/s: 19 rss: 71Mb L: 45/45 MS: 1 CrossOver- 00:06:24.004 [2024-05-15 10:58:21.150035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.150063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.004 [2024-05-15 10:58:21.150110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.150126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:24.004 [2024-05-15 10:58:21.150153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.150168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.004 #20 NEW cov: 12085 ft: 13981 corp: 16/601b lim: 45 exec/s: 20 rss: 71Mb L: 35/45 MS: 1 CopyPart- 00:06:24.004 [2024-05-15 10:58:21.220157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.220186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.004 [2024-05-15 10:58:21.220232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.004 [2024-05-15 10:58:21.220247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.262 #21 NEW cov: 12085 ft: 14278 corp: 17/623b lim: 45 exec/s: 21 rss: 72Mb L: 22/45 MS: 1 EraseBytes- 00:06:24.262 [2024-05-15 10:58:21.290535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 10:58:21.290564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.262 [2024-05-15 10:58:21.290610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 10:58:21.290625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.262 [2024-05-15 10:58:21.290653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 10:58:21.290668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.262 [2024-05-15 10:58:21.290695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 10:58:21.290710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.262 [2024-05-15 10:58:21.290737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:5a5a5a5a cdw11:5a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 10:58:21.290751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.262 #22 NEW cov: 12085 ft: 14292 corp: 18/668b lim: 45 exec/s: 22 rss: 72Mb L: 45/45 MS: 1 CrossOver- 00:06:24.262 [2024-05-15 10:58:21.360691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 
10:58:21.360720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.262 [2024-05-15 10:58:21.360752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 10:58:21.360768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.262 [2024-05-15 10:58:21.360795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 10:58:21.360810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.262 [2024-05-15 10:58:21.360837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.262 [2024-05-15 10:58:21.360852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.262 #23 NEW cov: 12085 ft: 14309 corp: 19/711b lim: 45 exec/s: 23 rss: 72Mb L: 43/45 MS: 1 ChangeBinInt- 00:06:24.262 [2024-05-15 10:58:21.410781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.410811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.263 [2024-05-15 10:58:21.410857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.410872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.263 [2024-05-15 10:58:21.410904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.410919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.263 [2024-05-15 10:58:21.410946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.410961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.263 #24 NEW cov: 12085 ft: 14347 corp: 20/754b lim: 45 exec/s: 24 rss: 72Mb L: 43/45 MS: 1 ShuffleBytes- 00:06:24.263 [2024-05-15 10:58:21.460985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.461015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.263 [2024-05-15 10:58:21.461049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 
[2024-05-15 10:58:21.461065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.263 [2024-05-15 10:58:21.461093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.461108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.263 [2024-05-15 10:58:21.461136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.461151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.263 #25 NEW cov: 12085 ft: 14419 corp: 21/797b lim: 45 exec/s: 25 rss: 72Mb L: 43/45 MS: 1 CopyPart- 00:06:24.263 [2024-05-15 10:58:21.510987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.511016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.263 [2024-05-15 10:58:21.511062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.511078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.263 [2024-05-15 10:58:21.511106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.263 [2024-05-15 10:58:21.511120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.521 #26 NEW cov: 12085 ft: 14446 corp: 22/826b lim: 45 exec/s: 26 rss: 72Mb L: 29/45 MS: 1 ShuffleBytes- 00:06:24.521 [2024-05-15 10:58:21.561230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.561259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.561305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5ac75a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.561320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.561353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.561368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.561402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 
[2024-05-15 10:58:21.561417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.561444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:5a0a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.561458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.521 #27 NEW cov: 12085 ft: 14481 corp: 23/871b lim: 45 exec/s: 27 rss: 72Mb L: 45/45 MS: 1 CrossOver- 00:06:24.521 [2024-05-15 10:58:21.631299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.631328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.631375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.631398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.631426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.631441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.521 #29 NEW cov: 12085 ft: 14501 corp: 24/906b lim: 45 exec/s: 29 rss: 72Mb L: 35/45 MS: 2 ChangeBit-CrossOver- 00:06:24.521 [2024-05-15 10:58:21.681395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a0d cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.681424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.681471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5ac75a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.681486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.681513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.681528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.521 #30 NEW cov: 12085 ft: 14530 corp: 25/939b lim: 45 exec/s: 30 rss: 72Mb L: 33/45 MS: 1 ChangeByte- 00:06:24.521 [2024-05-15 10:58:21.751551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.751581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.521 [2024-05-15 10:58:21.751627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:16000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.521 [2024-05-15 10:58:21.751642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.781 #31 NEW cov: 12085 ft: 14543 corp: 26/961b lim: 45 exec/s: 31 rss: 72Mb L: 22/45 MS: 1 ChangeBinInt- 00:06:24.781 [2024-05-15 10:58:21.811857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.811887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.811934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5ac75a cdw11:ad5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.811949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.811977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.811992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.812019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.812034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.812061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.812075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.781 #32 NEW cov: 12085 ft: 14547 corp: 27/1006b lim: 45 exec/s: 32 rss: 72Mb L: 45/45 MS: 1 ChangeBinInt- 00:06:24.781 [2024-05-15 10:58:21.861872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.861901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.861947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.861962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.861990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.862005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.781 #38 NEW cov: 12085 ft: 14553 corp: 28/1036b lim: 45 exec/s: 38 rss: 72Mb 
L: 30/45 MS: 1 InsertByte- 00:06:24.781 [2024-05-15 10:58:21.912053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5f5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.912083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.912129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:c75a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.912144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.912172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.912187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.912218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.912233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.781 #39 NEW cov: 12085 ft: 14577 corp: 29/1079b lim: 45 exec/s: 39 rss: 72Mb L: 43/45 MS: 1 ChangeBinInt- 00:06:24.781 [2024-05-15 10:58:21.982217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.982246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.982293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.982308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.781 [2024-05-15 10:58:21.982336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5ac70002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.781 [2024-05-15 10:58:21.982351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.781 #40 NEW cov: 12085 ft: 14579 corp: 30/1114b lim: 45 exec/s: 20 rss: 72Mb L: 35/45 MS: 1 ShuffleBytes- 00:06:24.781 #40 DONE cov: 12085 ft: 14579 corp: 30/1114b lim: 45 exec/s: 20 rss: 72Mb 00:06:24.781 Done 40 runs in 2 second(s) 00:06:24.781 [2024-05-15 10:58:22.034476] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:25.040 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:25.040 10:58:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:25.040 10:58:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # 
start_llvm_fuzz 6 1 0x1 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:25.041 10:58:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:25.041 [2024-05-15 10:58:22.201593] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:25.041 [2024-05-15 10:58:22.201664] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402259 ] 00:06:25.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.299 [2024-05-15 10:58:22.459819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.299 [2024-05-15 10:58:22.552533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.557 [2024-05-15 10:58:22.612042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.557 [2024-05-15 10:58:22.628000] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:25.557 [2024-05-15 10:58:22.628423] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:25.557 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:25.557 INFO: Seed: 2912802955 00:06:25.557 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:25.557 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:25.557 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:25.557 INFO: A corpus is not provided, starting from an empty corpus 00:06:25.557 #2 INITED exec/s: 0 rss: 63Mb 00:06:25.557 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:25.557 This may also happen if the target rejected all inputs we tried so far 00:06:25.557 [2024-05-15 10:58:22.677205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aaf cdw11:00000000 00:06:25.557 [2024-05-15 10:58:22.677232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.557 [2024-05-15 10:58:22.677289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fe9a cdw11:00000000 00:06:25.558 [2024-05-15 10:58:22.677303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.558 [2024-05-15 10:58:22.677357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001838 cdw11:00000000 00:06:25.558 [2024-05-15 10:58:22.677371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.558 [2024-05-15 10:58:22.677428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:25.558 [2024-05-15 10:58:22.677441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.817 NEW_FUNC[1/684]: 0x48c830 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:25.817 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:25.817 #4 NEW cov: 11755 ft: 11756 corp: 2/10b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 2 ShuffleBytes-CMP- DE: "\257\376\232\0308\373\205\000"- 00:06:25.817 [2024-05-15 10:58:23.007935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:25.817 [2024-05-15 10:58:23.007967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.817 [2024-05-15 10:58:23.008019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:25.817 [2024-05-15 10:58:23.008033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.817 [2024-05-15 10:58:23.008086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001838 cdw11:00000000 00:06:25.817 [2024-05-15 10:58:23.008099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.817 [2024-05-15 10:58:23.008150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:25.817 [2024-05-15 10:58:23.008162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.817 #5 NEW cov: 11888 ft: 12249 corp: 3/19b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CopyPart- 00:06:25.817 [2024-05-15 10:58:23.057657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ab2 cdw11:00000000 00:06:25.817 [2024-05-15 10:58:23.057683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.817 #7 NEW cov: 11894 ft: 12951 corp: 4/21b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 2 ShuffleBytes-InsertByte- 00:06:26.086 [2024-05-15 10:58:23.098089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a01 cdw11:00000000 00:06:26.086 [2024-05-15 10:58:23.098115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.098166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.098180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.098230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.098243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.098295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.098307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.087 #9 NEW cov: 11979 ft: 13211 corp: 5/30b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 2 ChangeByte-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:26.087 [2024-05-15 10:58:23.138016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.138041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.138106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.138120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.087 #11 NEW cov: 11979 ft: 13464 corp: 6/35b lim: 10 exec/s: 0 rss: 71Mb L: 5/9 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:26.087 [2024-05-15 10:58:23.178498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.178523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.178573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 
cdw10:0000fb85 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.178586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.178636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.178648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.178702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000038fb cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.178714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.178765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.178777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.087 #12 NEW cov: 11979 ft: 13598 corp: 7/45b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 CrossOver- 00:06:26.087 [2024-05-15 10:58:23.228477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000038fb cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.228503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.228553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003818 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.228566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.228614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000085fb cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.228627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.228676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.228688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.087 #13 NEW cov: 11979 ft: 13656 corp: 8/54b lim: 10 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:26.087 [2024-05-15 10:58:23.268694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.268719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.268769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.268782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.268832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 
cdw10:00001885 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.268845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.268910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000038fb cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.268923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.087 [2024-05-15 10:58:23.268973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.268986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.087 #14 NEW cov: 11979 ft: 13701 corp: 9/64b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:06:26.087 [2024-05-15 10:58:23.318421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000eb2 cdw11:00000000 00:06:26.087 [2024-05-15 10:58:23.318446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.087 #15 NEW cov: 11979 ft: 13765 corp: 10/66b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 ChangeBit- 00:06:26.350 [2024-05-15 10:58:23.368909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.368934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.368986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.368999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.369049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000038fb cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.369062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.369111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.369123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.350 #16 NEW cov: 11979 ft: 13807 corp: 11/74b lim: 10 exec/s: 0 rss: 71Mb L: 8/10 MS: 1 EraseBytes- 00:06:26.350 [2024-05-15 10:58:23.408966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.408990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.409039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.409052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 
10:58:23.409101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.409114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.409163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.409176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.350 #18 NEW cov: 11979 ft: 13846 corp: 12/82b lim: 10 exec/s: 0 rss: 71Mb L: 8/10 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:26.350 [2024-05-15 10:58:23.449116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.449141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.449193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.449206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.449255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000036fb cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.449268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.449319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.449331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.350 #19 NEW cov: 11979 ft: 13851 corp: 13/90b lim: 10 exec/s: 0 rss: 72Mb L: 8/10 MS: 1 ChangeASCIIInt- 00:06:26.350 [2024-05-15 10:58:23.499343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.499371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.499426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003818 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.499439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.499489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008585 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.499502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.499548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000018fb cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.499560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:06:26.350 [2024-05-15 10:58:23.499610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.499623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.350 #20 NEW cov: 11979 ft: 13877 corp: 14/100b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:26.350 [2024-05-15 10:58:23.549529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.549553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.549604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003818 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.549617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.549669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008585 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.549698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.549751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001838 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.549764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.549814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00001800 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.549827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.350 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:26.350 #21 NEW cov: 12002 ft: 13921 corp: 15/110b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:06:26.350 [2024-05-15 10:58:23.599594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.599619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.599669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.599681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.599731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.599747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.599795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.350 [2024-05-15 
10:58:23.599807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.350 [2024-05-15 10:58:23.599856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000038fb cdw11:00000000 00:06:26.350 [2024-05-15 10:58:23.599869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.611 #22 NEW cov: 12002 ft: 13951 corp: 16/120b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:06:26.611 [2024-05-15 10:58:23.639709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.639734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.639785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.639799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.639847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000f85 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.639860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.639907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000038fb cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.639919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.639967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.639979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.611 #23 NEW cov: 12002 ft: 13967 corp: 17/130b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:26.611 [2024-05-15 10:58:23.679706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a01 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.679730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.679782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.679795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.679844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.679857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.679908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 
10:58:23.679920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.611 #24 NEW cov: 12002 ft: 13990 corp: 18/139b lim: 10 exec/s: 24 rss: 72Mb L: 9/10 MS: 1 ChangeBit- 00:06:26.611 [2024-05-15 10:58:23.729863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.729887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.729959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.729973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.730026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.730039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.730090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.730104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.611 #25 NEW cov: 12002 ft: 13992 corp: 19/147b lim: 10 exec/s: 25 rss: 72Mb L: 8/10 MS: 1 ShuffleBytes- 00:06:26.611 [2024-05-15 10:58:23.780048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.780074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.780123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.780136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.780183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002b82 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.780196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.780244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008175 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.780256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.611 #26 NEW cov: 12002 ft: 14015 corp: 20/155b lim: 10 exec/s: 26 rss: 72Mb L: 8/10 MS: 1 CMP- DE: "\000\000\000\000+\202\201u"- 00:06:26.611 [2024-05-15 10:58:23.819920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.819944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.611 [2024-05-15 10:58:23.819995] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.611 [2024-05-15 10:58:23.820008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.611 #27 NEW cov: 12002 ft: 14038 corp: 21/160b lim: 10 exec/s: 27 rss: 72Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:26.611 [2024-05-15 10:58:23.870297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000028 cdw11:00000000 00:06:26.612 [2024-05-15 10:58:23.870321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.612 [2024-05-15 10:58:23.870391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.612 [2024-05-15 10:58:23.870404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.612 [2024-05-15 10:58:23.870457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.612 [2024-05-15 10:58:23.870470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.612 [2024-05-15 10:58:23.870523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.612 [2024-05-15 10:58:23.870536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.871 #28 NEW cov: 12002 ft: 14062 corp: 22/168b lim: 10 exec/s: 28 rss: 72Mb L: 8/10 MS: 1 ChangeByte- 00:06:26.871 [2024-05-15 10:58:23.920516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.920541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:23.920592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.920604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:23.920653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000188b cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.920666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:23.920716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000038fb cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.920729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:23.920777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.920790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.871 #29 NEW cov: 12002 ft: 14073 corp: 23/178b 
lim: 10 exec/s: 29 rss: 72Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:26.871 [2024-05-15 10:58:23.960637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001831 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.960662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:23.960713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.960726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:23.960773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000f85 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.960786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:23.960835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000038fb cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.960847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:23.960896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:23.960908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.871 #30 NEW cov: 12002 ft: 14094 corp: 24/188b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:26.871 [2024-05-15 10:58:24.010353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000055b2 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.010378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.871 #31 NEW cov: 12002 ft: 14160 corp: 25/190b lim: 10 exec/s: 31 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:06:26.871 [2024-05-15 10:58:24.050835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002afb cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.050859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:24.050926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.050940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:24.050991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.051004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:24.051063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.051076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.871 #32 NEW cov: 12002 ft: 14191 corp: 26/199b lim: 10 exec/s: 32 rss: 73Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:26.871 [2024-05-15 10:58:24.101021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.101045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:24.101096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.101109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:24.101157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001885 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.101171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:24.101219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000034fb cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.101231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.871 [2024-05-15 10:58:24.101278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:26.871 [2024-05-15 10:58:24.101291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.871 #33 NEW cov: 12002 ft: 14197 corp: 27/209b lim: 10 exec/s: 33 rss: 73Mb L: 10/10 MS: 1 ChangeASCIIInt- 00:06:27.131 [2024-05-15 10:58:24.141131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.141156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.141208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.141221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.141272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000f32 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.141285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.141345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000038fb cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.141361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.141417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.141429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.131 #34 NEW cov: 12002 ft: 14209 corp: 28/219b lim: 10 exec/s: 34 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:06:27.131 [2024-05-15 10:58:24.181149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.181173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.181226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.181239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.181290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002b82 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.181303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.181352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008175 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.181365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.131 #35 NEW cov: 12002 ft: 14253 corp: 29/227b lim: 10 exec/s: 35 rss: 73Mb L: 8/10 MS: 1 ShuffleBytes- 00:06:27.131 [2024-05-15 10:58:24.231244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.231268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.231321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.231334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.231389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.231402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.231452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.231465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.131 #36 NEW cov: 12002 ft: 14273 corp: 30/236b lim: 10 exec/s: 36 rss: 73Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:27.131 [2024-05-15 10:58:24.271176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.271202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.271253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008500 
cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.271267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.131 #37 NEW cov: 12002 ft: 14296 corp: 31/241b lim: 10 exec/s: 37 rss: 73Mb L: 5/10 MS: 1 CrossOver- 00:06:27.131 [2024-05-15 10:58:24.311542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.311572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.311622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.311635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.311684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000fb cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.311697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.311746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.311758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.131 #38 NEW cov: 12002 ft: 14312 corp: 32/250b lim: 10 exec/s: 38 rss: 73Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:27.131 [2024-05-15 10:58:24.361749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001885 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.361774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.361825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001885 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.361837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.361889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001885 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.361918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.361971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000034fb cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.361984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.131 [2024-05-15 10:58:24.362032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:27.131 [2024-05-15 10:58:24.362045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.131 #39 NEW cov: 12002 ft: 14373 corp: 33/260b lim: 10 exec/s: 39 rss: 73Mb L: 10/10 MS: 1 ChangeASCIIInt- 00:06:27.391 [2024-05-15 
10:58:24.401738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.391 [2024-05-15 10:58:24.401763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.391 [2024-05-15 10:58:24.401810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.401823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.401873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000162b cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.401885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.401935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008281 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.401948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.392 #40 NEW cov: 12002 ft: 14378 corp: 34/269b lim: 10 exec/s: 40 rss: 73Mb L: 9/10 MS: 1 InsertByte- 00:06:27.392 [2024-05-15 10:58:24.451987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001801 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.452012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.452063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.452076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.452127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.452140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.452189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.452202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.452251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000003fb cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.452264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.392 #41 NEW cov: 12002 ft: 14384 corp: 35/279b lim: 10 exec/s: 41 rss: 73Mb L: 10/10 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\003"- 00:06:27.392 [2024-05-15 10:58:24.502122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.502147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:27.392 [2024-05-15 10:58:24.502198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.502212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.502264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001885 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.502278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.502328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000032fb cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.502341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.502394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.502407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.542237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001838 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.542262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.542313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb85 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.542326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.542375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e881 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.542394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.542448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000032fb cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.542461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.542511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008500 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.542524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.392 #43 NEW cov: 12002 ft: 14389 corp: 36/289b lim: 10 exec/s: 43 rss: 74Mb L: 10/10 MS: 2 ChangeASCIIInt-ChangeBinInt- 00:06:27.392 [2024-05-15 10:58:24.582332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001885 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.582357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.582409] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003818 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.582422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.582471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008585 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.582484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.582534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008538 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.582546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.582594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00001800 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.582607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.392 #44 NEW cov: 12002 ft: 14428 corp: 37/299b lim: 10 exec/s: 44 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:06:27.392 [2024-05-15 10:58:24.632466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.632491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.632542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.632556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.632608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.632637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.632687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.632700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.392 [2024-05-15 10:58:24.632751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.392 [2024-05-15 10:58:24.632764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.392 #45 NEW cov: 12002 ft: 14430 corp: 38/309b lim: 10 exec/s: 45 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:27.652 [2024-05-15 10:58:24.672550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:27.652 [2024-05-15 10:58:24.672576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.652 [2024-05-15 10:58:24.672627] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.652 [2024-05-15 10:58:24.672640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.652 [2024-05-15 10:58:24.672689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:06:27.652 [2024-05-15 10:58:24.672702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.652 [2024-05-15 10:58:24.672755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.652 [2024-05-15 10:58:24.672767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.652 #46 NEW cov: 12002 ft: 14436 corp: 39/318b lim: 10 exec/s: 23 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:27.652 #46 DONE cov: 12002 ft: 14436 corp: 39/318b lim: 10 exec/s: 23 rss: 74Mb 00:06:27.652 ###### Recommended dictionary. ###### 00:06:27.652 "\257\376\232\0308\373\205\000" # Uses: 0 00:06:27.652 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:27.652 "\000\000\000\000+\202\201u" # Uses: 0 00:06:27.652 "\001\000\000\000\000\000\000\003" # Uses: 0 00:06:27.652 ###### End of recommended dictionary. ###### 00:06:27.652 Done 46 runs in 2 second(s) 00:06:27.652 [2024-05-15 10:58:24.692559] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:27.652 10:58:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:27.652 [2024-05-15 10:58:24.861004] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:27.652 [2024-05-15 10:58:24.861107] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402701 ] 00:06:27.652 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.911 [2024-05-15 10:58:25.108856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.170 [2024-05-15 10:58:25.201113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.170 [2024-05-15 10:58:25.260401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.170 [2024-05-15 10:58:25.276350] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:28.170 [2024-05-15 10:58:25.276772] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:28.170 INFO: Running with entropic power schedule (0xFF, 100). 00:06:28.170 INFO: Seed: 1266838124 00:06:28.170 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:28.170 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:28.170 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:28.170 INFO: A corpus is not provided, starting from an empty corpus 00:06:28.170 #2 INITED exec/s: 0 rss: 63Mb 00:06:28.170 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:28.170 This may also happen if the target rejected all inputs we tried so far 00:06:28.170 [2024-05-15 10:58:25.325927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:28.170 [2024-05-15 10:58:25.325955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.430 NEW_FUNC[1/684]: 0x48d220 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:28.430 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:28.430 #4 NEW cov: 11758 ft: 11759 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 2 CopyPart-CopyPart- 00:06:28.430 [2024-05-15 10:58:25.656623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:28.430 [2024-05-15 10:58:25.656655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.430 #5 NEW cov: 11888 ft: 12173 corp: 3/5b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:06:28.689 [2024-05-15 10:58:25.707121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.707147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.689 [2024-05-15 10:58:25.707199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.707213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.689 [2024-05-15 10:58:25.707263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.707276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.689 [2024-05-15 10:58:25.707327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.707340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.689 [2024-05-15 10:58:25.707398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000140a cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.707411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.689 #6 NEW cov: 11894 ft: 12791 corp: 4/15b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:28.689 [2024-05-15 10:58:25.746763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a5d cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.746787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.689 #7 NEW cov: 11979 ft: 13206 corp: 5/18b lim: 10 exec/s: 0 rss: 71Mb L: 3/10 MS: 1 InsertByte- 00:06:28.689 
[2024-05-15 10:58:25.786947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a4d cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.786973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.689 #8 NEW cov: 11979 ft: 13309 corp: 6/21b lim: 10 exec/s: 0 rss: 71Mb L: 3/10 MS: 1 InsertByte- 00:06:28.689 [2024-05-15 10:58:25.837485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.837511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.689 [2024-05-15 10:58:25.837579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.837594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.689 [2024-05-15 10:58:25.837646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.837659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.689 [2024-05-15 10:58:25.837711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.837724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.689 [2024-05-15 10:58:25.837776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000141a cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.837789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.689 #9 NEW cov: 11979 ft: 13360 corp: 7/31b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeBit- 00:06:28.689 [2024-05-15 10:58:25.887191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a29 cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.887216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.689 #10 NEW cov: 11979 ft: 13445 corp: 8/34b lim: 10 exec/s: 0 rss: 71Mb L: 3/10 MS: 1 InsertByte- 00:06:28.689 [2024-05-15 10:58:25.927286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000aa5d cdw11:00000000 00:06:28.689 [2024-05-15 10:58:25.927311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.949 #11 NEW cov: 11979 ft: 13556 corp: 9/37b lim: 10 exec/s: 0 rss: 71Mb L: 3/10 MS: 1 ChangeByte- 00:06:28.949 [2024-05-15 10:58:25.977443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a5d cdw11:00000000 00:06:28.949 [2024-05-15 10:58:25.977468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.949 #12 NEW cov: 11979 ft: 13571 corp: 10/40b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 1 ChangeBit- 00:06:28.949 [2024-05-15 
10:58:26.027629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.027656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.949 #13 NEW cov: 11979 ft: 13611 corp: 11/42b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:28.949 [2024-05-15 10:58:26.068196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.068221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.949 [2024-05-15 10:58:26.068273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.068287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.949 [2024-05-15 10:58:26.068337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.068350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.949 [2024-05-15 10:58:26.068415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.068429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.949 [2024-05-15 10:58:26.068483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000150a cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.068496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.949 #14 NEW cov: 11979 ft: 13639 corp: 12/52b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeBit- 00:06:28.949 [2024-05-15 10:58:26.107828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a4d cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.107853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.949 #15 NEW cov: 11979 ft: 13652 corp: 13/55b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 1 ShuffleBytes- 00:06:28.949 [2024-05-15 10:58:26.147978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a23 cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.148003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.949 #17 NEW cov: 11979 ft: 13688 corp: 14/57b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 2 EraseBytes-InsertByte- 00:06:28.949 [2024-05-15 10:58:26.188037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000aa0a cdw11:00000000 00:06:28.949 [2024-05-15 10:58:26.188062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.949 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 
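Note on the completions above: this run (-Z 7) drives fuzz_admin_delete_io_submission_queue_command (llvm_nvme_fuzz.c:172), so every case submits admin opcode 0x00, Delete I/O Submission Queue, with fuzzed dwords (the preceding run used opcode 0x04, Delete I/O Completion Queue). The TCP target answers each one with INVALID OPCODE because an NVMe-oF controller does not implement the PCIe-style queue create/delete admin commands; fabrics I/O queues are set up through the Connect command instead. The C sketch below shapes two input bytes into such a command; it uses a hand-rolled struct and an assumed byte-to-field mapping, not SPDK's spdk_nvme_cmd or the fuzzer's actual logic.

/* Illustrative only: a minimal command sketch, not SPDK's struct spdk_nvme_cmd. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct nvme_cmd_sketch {
    uint8_t  opc;      /* admin opcode: 0x00 = Delete I/O SQ, 0x04 = Delete I/O CQ */
    uint16_t cid;      /* command identifier echoed back in the completion */
    uint32_t cdw10;    /* bits 15:0 carry the queue id to delete */
    uint32_t cdw11;
};

/* Shape two fuzz input bytes into a Delete I/O SQ command, roughly the pattern
 * the log entries above print as "DELETE IO SQ (00) ... cdw10:00000a0a". */
static void build_delete_io_sq(struct nvme_cmd_sketch *cmd,
                               const uint8_t *data, size_t len)
{
    memset(cmd, 0, sizeof(*cmd));
    cmd->opc = 0x00;                                     /* Delete I/O Submission Queue */
    if (len >= 2) {
        cmd->cdw10 = (uint32_t)data[0] << 8 | data[1];   /* fuzzed queue id */
    }
}

int main(void)
{
    const uint8_t input[] = { 0x0a, 0x0a };              /* mirrors cdw10:00000a0a above */
    struct nvme_cmd_sketch cmd;

    build_delete_io_sq(&cmd, input, sizeof(input));
    printf("opc=0x%02x cdw10=0x%08x\n", cmd.opc, cmd.cdw10);
    return 0;
}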
00:06:28.949 #18 NEW cov: 12002 ft: 13717 corp: 15/60b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 1 CrossOver- 00:06:29.208 [2024-05-15 10:58:26.228196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cc0a cdw11:00000000 00:06:29.208 [2024-05-15 10:58:26.228222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.208 #19 NEW cov: 12002 ft: 13764 corp: 16/62b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ChangeByte- 00:06:29.208 [2024-05-15 10:58:26.268766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.208 [2024-05-15 10:58:26.268792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.208 [2024-05-15 10:58:26.268849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.208 [2024-05-15 10:58:26.268863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.208 [2024-05-15 10:58:26.268914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.208 [2024-05-15 10:58:26.268927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.208 [2024-05-15 10:58:26.268978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.268992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.269043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000141a cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.269056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.209 #20 NEW cov: 12002 ft: 13770 corp: 17/72b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:29.209 [2024-05-15 10:58:26.318422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000aa1a cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.318448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.209 #21 NEW cov: 12002 ft: 13813 corp: 18/75b lim: 10 exec/s: 21 rss: 72Mb L: 3/10 MS: 1 ChangeBit- 00:06:29.209 [2024-05-15 10:58:26.369092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.369117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.369169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.369182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.369232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.369245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.369298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.369311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.369360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000140a cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.369373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.209 #22 NEW cov: 12002 ft: 13823 corp: 19/85b lim: 10 exec/s: 22 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:29.209 [2024-05-15 10:58:26.409121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.409147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.409198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.409212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.409261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.409278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.409330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.409343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.209 [2024-05-15 10:58:26.409398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000150a cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.409412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.209 #23 NEW cov: 12002 ft: 13846 corp: 20/95b lim: 10 exec/s: 23 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:06:29.209 [2024-05-15 10:58:26.458817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001a0a cdw11:00000000 00:06:29.209 [2024-05-15 10:58:26.458842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.468 #24 NEW cov: 12002 ft: 13852 corp: 21/97b lim: 10 exec/s: 24 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:06:29.468 [2024-05-15 10:58:26.498944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000aa00 cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.498970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.468 
#25 NEW cov: 12002 ft: 13863 corp: 22/100b lim: 10 exec/s: 25 rss: 72Mb L: 3/10 MS: 1 ChangeBinInt- 00:06:29.468 [2024-05-15 10:58:26.539114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000aa00 cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.539140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.468 [2024-05-15 10:58:26.539192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001403 cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.539205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.468 #26 NEW cov: 12002 ft: 14050 corp: 23/105b lim: 10 exec/s: 26 rss: 72Mb L: 5/10 MS: 1 CrossOver- 00:06:29.468 [2024-05-15 10:58:26.589669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.589694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.468 [2024-05-15 10:58:26.589748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.589762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.468 [2024-05-15 10:58:26.589814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.589827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.468 [2024-05-15 10:58:26.589879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.589892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.468 [2024-05-15 10:58:26.589944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000140a cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.589958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.468 #27 NEW cov: 12002 ft: 14059 corp: 24/115b lim: 10 exec/s: 27 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:29.468 [2024-05-15 10:58:26.639359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000601a cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.639389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.468 #28 NEW cov: 12002 ft: 14105 corp: 25/118b lim: 10 exec/s: 28 rss: 73Mb L: 3/10 MS: 1 ChangeByte- 00:06:29.468 [2024-05-15 10:58:26.679465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b30a cdw11:00000000 00:06:29.468 [2024-05-15 10:58:26.679490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.468 #29 NEW cov: 12002 ft: 14125 corp: 26/121b lim: 10 exec/s: 29 rss: 73Mb L: 3/10 MS: 1 ChangeBinInt- 
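Each completion line in this log is spdk_nvme_print_completion rendering a 16-byte completion entry: cdw0 is the command-specific result, sqhd the submission queue head, and the "(00/01)" pair is status code type 0 (generic) with status code 0x01 (Invalid Opcode), followed by the phase (p), more (m) and do-not-retry (dnr) bits. The helper below decodes a raw status/phase word the same way; the bit layout follows the NVMe base specification, but the code itself is only an illustration, not SPDK's.

/* Illustrative decoder for the 16-bit status+phase word of an NVMe completion
 * entry (DW3 bits 31:16); layout per the NVMe base spec, not SPDK code. */
#include <stdint.h>
#include <stdio.h>

static void print_status(uint16_t w)
{
    unsigned p   = w & 0x1;          /* phase tag */
    unsigned sc  = (w >> 1) & 0xff;  /* status code, 0x01 = Invalid Opcode */
    unsigned sct = (w >> 9) & 0x7;   /* status code type, 0 = generic */
    unsigned m   = (w >> 14) & 0x1;  /* more status information available */
    unsigned dnr = (w >> 15) & 0x1;  /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT=0, SC=0x01, P/M/DNR all zero -> prints "(00/01) p:0 m:0 dnr:0",
     * matching the INVALID OPCODE completions in the log above. */
    print_status(0x01 << 1);
    return 0;
}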
00:06:29.469 [2024-05-15 10:58:26.719619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:29.469 [2024-05-15 10:58:26.719644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.728 #30 NEW cov: 12002 ft: 14131 corp: 27/124b lim: 10 exec/s: 30 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:06:29.728 [2024-05-15 10:58:26.759653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.759677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.728 #31 NEW cov: 12002 ft: 14143 corp: 28/127b lim: 10 exec/s: 31 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:06:29.728 [2024-05-15 10:58:26.800198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.800222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.728 [2024-05-15 10:58:26.800273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.800286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.728 [2024-05-15 10:58:26.800337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.800350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.728 [2024-05-15 10:58:26.800405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff2e cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.800418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.728 #33 NEW cov: 12002 ft: 14171 corp: 29/135b lim: 10 exec/s: 33 rss: 73Mb L: 8/10 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:29.728 [2024-05-15 10:58:26.839914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a23 cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.839941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.728 #34 NEW cov: 12002 ft: 14204 corp: 30/137b lim: 10 exec/s: 34 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:29.728 [2024-05-15 10:58:26.890607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001c14 cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.890632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.728 [2024-05-15 10:58:26.890684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.890696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.728 [2024-05-15 10:58:26.890746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.890762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.728 [2024-05-15 10:58:26.890811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.890824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.728 [2024-05-15 10:58:26.890875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000150a cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.890888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.728 #35 NEW cov: 12002 ft: 14230 corp: 31/147b lim: 10 exec/s: 35 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:29.728 [2024-05-15 10:58:26.930162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.930187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.728 #36 NEW cov: 12002 ft: 14302 corp: 32/150b lim: 10 exec/s: 36 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:06:29.728 [2024-05-15 10:58:26.970298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001a14 cdw11:00000000 00:06:29.728 [2024-05-15 10:58:26.970323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.987 #37 NEW cov: 12002 ft: 14314 corp: 33/152b lim: 10 exec/s: 37 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:06:29.987 [2024-05-15 10:58:27.020424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001414 cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.020448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.987 #38 NEW cov: 12002 ft: 14348 corp: 34/155b lim: 10 exec/s: 38 rss: 73Mb L: 3/10 MS: 1 CrossOver- 00:06:29.987 [2024-05-15 10:58:27.070887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008a00 cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.070912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.070979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.070993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.071044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.071057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.071108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.987 [2024-05-15 
10:58:27.071121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.987 #41 NEW cov: 12002 ft: 14356 corp: 35/163b lim: 10 exec/s: 41 rss: 73Mb L: 8/10 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes- 00:06:29.987 [2024-05-15 10:58:27.110903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.110928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.110981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.110994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.111049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.111062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.987 #42 NEW cov: 12002 ft: 14494 corp: 36/170b lim: 10 exec/s: 42 rss: 73Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:06:29.987 [2024-05-15 10:58:27.161166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.161192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.161243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.161256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.161306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.161319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.161368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff34 cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.161385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.987 #43 NEW cov: 12002 ft: 14506 corp: 37/178b lim: 10 exec/s: 43 rss: 74Mb L: 8/10 MS: 1 ChangeBinInt- 00:06:29.987 [2024-05-15 10:58:27.211082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a5d cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.211107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.211159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.211173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.987 #44 NEW cov: 12002 ft: 14536 
corp: 38/183b lim: 10 exec/s: 44 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:06:29.987 [2024-05-15 10:58:27.251491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a5d cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.251516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.987 [2024-05-15 10:58:27.251566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:29.987 [2024-05-15 10:58:27.251580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.988 [2024-05-15 10:58:27.251630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000230a cdw11:00000000 00:06:29.988 [2024-05-15 10:58:27.251644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.988 [2024-05-15 10:58:27.251695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005d0a cdw11:00000000 00:06:29.988 [2024-05-15 10:58:27.251708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.248 #45 NEW cov: 12002 ft: 14540 corp: 39/191b lim: 10 exec/s: 45 rss: 74Mb L: 8/10 MS: 1 CopyPart- 00:06:30.248 [2024-05-15 10:58:27.301717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001454 cdw11:00000000 00:06:30.248 [2024-05-15 10:58:27.301741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.248 [2024-05-15 10:58:27.301796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001414 cdw11:00000000 00:06:30.248 [2024-05-15 10:58:27.301810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.248 [2024-05-15 10:58:27.301859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001414 cdw11:00000000 00:06:30.248 [2024-05-15 10:58:27.301872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.248 [2024-05-15 10:58:27.301923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001414 cdw11:00000000 00:06:30.248 [2024-05-15 10:58:27.301936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.248 [2024-05-15 10:58:27.301984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000150a cdw11:00000000 00:06:30.248 [2024-05-15 10:58:27.301997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:30.248 #46 NEW cov: 12002 ft: 14552 corp: 40/201b lim: 10 exec/s: 23 rss: 74Mb L: 10/10 MS: 1 ChangeBit- 00:06:30.248 #46 DONE cov: 12002 ft: 14552 corp: 40/201b lim: 10 exec/s: 23 rss: 74Mb 00:06:30.248 Done 46 runs in 2 second(s) 00:06:30.248 [2024-05-15 10:58:27.331193] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:30.248 10:58:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:30.248 [2024-05-15 10:58:27.499517] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
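The run launched here (-Z 8, trsvcid 4408) targets the Namespace Attachment admin command, which is why the entries that follow print NAMESPACE ATTACHMENT (15) with an SGL data block: opcode 0x15 selects attach or detach via CDW10 bits 3:0 and carries a controller list in the data buffer. The sketch below builds such a command from fuzz bytes; the struct and byte-to-field mapping are illustrative assumptions, not the fuzzer's implementation.

/* Illustrative Namespace Attachment (opcode 0x15) construction; hand-rolled
 * struct, not SPDK's spdk_nvme_cmd, and an assumed byte-to-field mapping. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct nvme_cmd_sketch {
    uint8_t  opc;     /* 0x15 = Namespace Attachment */
    uint32_t nsid;    /* namespace the controller list applies to */
    uint32_t cdw10;   /* bits 3:0 = SEL: 0 attach, 1 detach */
};

static void build_ns_attach(struct nvme_cmd_sketch *cmd,
                            const uint8_t *data, size_t len)
{
    memset(cmd, 0, sizeof(*cmd));
    cmd->opc = 0x15;
    if (len >= 1) {
        cmd->cdw10 = data[0] & 0xf;          /* fuzzed SEL field */
    }
    if (len >= 5) {
        memcpy(&cmd->nsid, data + 1, 4);     /* fuzzed namespace id */
    }
}

int main(void)
{
    const uint8_t input[] = { 0x00, 0x00, 0x00, 0x00, 0x00 };
    struct nvme_cmd_sketch cmd;

    build_ns_attach(&cmd, input, sizeof(input));
    printf("opc=0x%02x nsid=%u cdw10=0x%08x\n", cmd.opc, cmd.nsid, cmd.cdw10);
    return 0;
}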
00:06:30.248 [2024-05-15 10:58:27.499609] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403081 ] 00:06:30.507 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.507 [2024-05-15 10:58:27.755503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.766 [2024-05-15 10:58:27.835467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.766 [2024-05-15 10:58:27.895213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.766 [2024-05-15 10:58:27.911168] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:30.766 [2024-05-15 10:58:27.911615] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:30.766 INFO: Running with entropic power schedule (0xFF, 100). 00:06:30.766 INFO: Seed: 3900840691 00:06:30.766 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:30.766 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:30.766 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:30.766 INFO: A corpus is not provided, starting from an empty corpus 00:06:30.766 [2024-05-15 10:58:27.981697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.766 [2024-05-15 10:58:27.981734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.766 #2 INITED cov: 11781 ft: 11780 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:31.025 [2024-05-15 10:58:28.032931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.025 [2024-05-15 10:58:28.032959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.025 [2024-05-15 10:58:28.033038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.025 [2024-05-15 10:58:28.033055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.025 [2024-05-15 10:58:28.033127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.025 [2024-05-15 10:58:28.033143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.025 [2024-05-15 10:58:28.033214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.025 [2024-05-15 10:58:28.033229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.284 NEW_FUNC[1/1]: 0x15d94e0 in nvme_ctrlr_get_ready_timeout 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:1224 00:06:31.284 #3 NEW cov: 11916 ft: 13251 corp: 2/5b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:31.284 [2024-05-15 10:58:28.363052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.363088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.363234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.363253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.363404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.363420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.363554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.363573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.284 #4 NEW cov: 11922 ft: 13682 corp: 3/9b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:31.284 [2024-05-15 10:58:28.413121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.413149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.413288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.413306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.413444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.413463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.413594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.413612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.284 #5 NEW cov: 12007 ft: 13969 corp: 4/13b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:06:31.284 [2024-05-15 10:58:28.472877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.472905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.473043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.473060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.284 #6 NEW cov: 12007 ft: 14307 corp: 5/15b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 EraseBytes- 00:06:31.284 [2024-05-15 10:58:28.533551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.533580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.533727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.533744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.533882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.533899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.284 [2024-05-15 10:58:28.534039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.284 [2024-05-15 10:58:28.534058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.543 #7 NEW cov: 12007 ft: 14398 corp: 6/19b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeBit- 00:06:31.543 [2024-05-15 10:58:28.583229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.583255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.543 [2024-05-15 10:58:28.583406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.583423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.543 #8 NEW cov: 12007 ft: 14450 corp: 7/21b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 CrossOver- 00:06:31.543 [2024-05-15 10:58:28.643950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.643977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.543 [2024-05-15 10:58:28.644142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.644159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.543 [2024-05-15 10:58:28.644306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.644325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.543 [2024-05-15 10:58:28.644470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.644492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.543 #9 NEW cov: 12007 ft: 14477 corp: 8/25b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:06:31.543 [2024-05-15 10:58:28.694243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.694268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.543 [2024-05-15 10:58:28.694403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.694421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.543 [2024-05-15 10:58:28.694560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.694576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.543 [2024-05-15 10:58:28.694708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.694725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.543 [2024-05-15 10:58:28.694870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.543 [2024-05-15 10:58:28.694889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.544 #10 NEW cov: 12007 ft: 14580 corp: 9/30b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertByte- 00:06:31.544 [2024-05-15 10:58:28.754254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.544 [2024-05-15 10:58:28.754283] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.544 [2024-05-15 10:58:28.754422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.544 [2024-05-15 10:58:28.754439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.544 [2024-05-15 10:58:28.754577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.544 [2024-05-15 10:58:28.754594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.544 [2024-05-15 10:58:28.754725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.544 [2024-05-15 10:58:28.754740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.544 #11 NEW cov: 12007 ft: 14599 corp: 10/34b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:31.544 [2024-05-15 10:58:28.804465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.544 [2024-05-15 10:58:28.804505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.544 [2024-05-15 10:58:28.804644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.544 [2024-05-15 10:58:28.804662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.544 [2024-05-15 10:58:28.804809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.544 [2024-05-15 10:58:28.804827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.544 [2024-05-15 10:58:28.804961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.544 [2024-05-15 10:58:28.804979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.803 #12 NEW cov: 12007 ft: 14622 corp: 11/38b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:31.803 [2024-05-15 10:58:28.864491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.864518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.864655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:31.803 [2024-05-15 10:58:28.864676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.864818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.864836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.864976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.864994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.803 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:31.803 #13 NEW cov: 12030 ft: 14670 corp: 12/42b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ChangeByte- 00:06:31.803 [2024-05-15 10:58:28.914753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.914780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.914917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.914935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.915065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.915083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.915220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.915238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.803 #14 NEW cov: 12030 ft: 14753 corp: 13/46b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ChangeBit- 00:06:31.803 [2024-05-15 10:58:28.975210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.975237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.975378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.975399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.803 
[2024-05-15 10:58:28.975548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.975566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.975706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.975724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:28.975866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:28.975886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.803 #15 NEW cov: 12030 ft: 14775 corp: 14/51b lim: 5 exec/s: 15 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:31.803 [2024-05-15 10:58:29.035121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:29.035147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:29.035273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:29.035289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.803 [2024-05-15 10:58:29.035426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.803 [2024-05-15 10:58:29.035441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.804 [2024-05-15 10:58:29.035585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.804 [2024-05-15 10:58:29.035600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.804 #16 NEW cov: 12030 ft: 14784 corp: 15/55b lim: 5 exec/s: 16 rss: 72Mb L: 4/5 MS: 1 CopyPart- 00:06:32.063 [2024-05-15 10:58:29.085282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.085310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.085450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.085467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.085604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.085622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.085755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.085771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.063 #17 NEW cov: 12030 ft: 14810 corp: 16/59b lim: 5 exec/s: 17 rss: 72Mb L: 4/5 MS: 1 CopyPart- 00:06:32.063 [2024-05-15 10:58:29.135456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.135483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.135635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.135654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.135801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.135822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.135967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.135984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.063 #18 NEW cov: 12030 ft: 14818 corp: 17/63b lim: 5 exec/s: 18 rss: 72Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:32.063 [2024-05-15 10:58:29.185603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.185632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.185771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.185789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.185925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.185944] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.186078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.186095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.063 #19 NEW cov: 12030 ft: 14839 corp: 18/67b lim: 5 exec/s: 19 rss: 72Mb L: 4/5 MS: 1 CrossOver- 00:06:32.063 [2024-05-15 10:58:29.245632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.245660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.245814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.245833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.245975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.245993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.063 #20 NEW cov: 12030 ft: 15000 corp: 19/70b lim: 5 exec/s: 20 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:32.063 [2024-05-15 10:58:29.296031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.296058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.296198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.296217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.296361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.296384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.063 [2024-05-15 10:58:29.296528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.063 [2024-05-15 10:58:29.296547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.063 #21 NEW cov: 12030 ft: 15021 corp: 20/74b lim: 5 exec/s: 21 rss: 72Mb L: 4/5 MS: 1 ChangeBit- 00:06:32.322 [2024-05-15 10:58:29.355603] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.355631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.355782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.355801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.322 #22 NEW cov: 12030 ft: 15082 corp: 21/76b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:32.322 [2024-05-15 10:58:29.406277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.406304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.406447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.406464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.406600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.406619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.406753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.406771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.322 #23 NEW cov: 12030 ft: 15101 corp: 22/80b lim: 5 exec/s: 23 rss: 72Mb L: 4/5 MS: 1 ChangeBinInt- 00:06:32.322 [2024-05-15 10:58:29.456581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.456608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.456753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.456771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.456908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.456925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.457070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.457088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.322 #24 NEW cov: 12030 ft: 15112 corp: 23/84b lim: 5 exec/s: 24 rss: 72Mb L: 4/5 MS: 1 ChangeBit- 00:06:32.322 [2024-05-15 10:58:29.515781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.515808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.322 #25 NEW cov: 12030 ft: 15169 corp: 24/85b lim: 5 exec/s: 25 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:32.322 [2024-05-15 10:58:29.567126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.567154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.567301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.567321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.567463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.567482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.567613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.322 [2024-05-15 10:58:29.567630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.322 [2024-05-15 10:58:29.567769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.323 [2024-05-15 10:58:29.567785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.582 #26 NEW cov: 12030 ft: 15247 corp: 25/90b lim: 5 exec/s: 26 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:06:32.582 [2024-05-15 10:58:29.626969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.626998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.627141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.627161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.627294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.627310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.627457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.627478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.582 #27 NEW cov: 12030 ft: 15256 corp: 26/94b lim: 5 exec/s: 27 rss: 73Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:32.582 [2024-05-15 10:58:29.687500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.687531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.687679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.687700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.687843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.687860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.688001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.688019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.688165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.688183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.582 #28 NEW cov: 12030 ft: 15279 corp: 27/99b lim: 5 exec/s: 28 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:06:32.582 [2024-05-15 10:58:29.747120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.747150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.747298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.747317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.747459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.747478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.582 #29 NEW cov: 12030 ft: 15295 corp: 28/102b lim: 5 exec/s: 29 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:32.582 [2024-05-15 10:58:29.797665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.797692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.797828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.797845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.797985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.798006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.582 [2024-05-15 10:58:29.798143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.582 [2024-05-15 10:58:29.798161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.582 #30 NEW cov: 12030 ft: 15316 corp: 29/106b lim: 5 exec/s: 30 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:06:32.842 [2024-05-15 10:58:29.847806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.847834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.847966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.847983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.848129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.848144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.848284] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.848301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.842 #31 NEW cov: 12030 ft: 15326 corp: 30/110b lim: 5 exec/s: 31 rss: 73Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:32.842 [2024-05-15 10:58:29.907878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.907906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.908048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.908068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.908211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.908230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.908369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.908393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.842 #32 NEW cov: 12030 ft: 15327 corp: 31/114b lim: 5 exec/s: 32 rss: 73Mb L: 4/5 MS: 1 ChangeBinInt- 00:06:32.842 [2024-05-15 10:58:29.968453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.968481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.968623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.968643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.968786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.968803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.968958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.968974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:06:32.842 [2024-05-15 10:58:29.969114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.842 [2024-05-15 10:58:29.969135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.842 #33 NEW cov: 12030 ft: 15336 corp: 32/119b lim: 5 exec/s: 16 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:06:32.842 #33 DONE cov: 12030 ft: 15336 corp: 32/119b lim: 5 exec/s: 16 rss: 73Mb 00:06:32.842 Done 33 runs in 2 second(s) 00:06:32.842 [2024-05-15 10:58:29.999396] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:33.101 10:58:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:33.101 [2024-05-15 10:58:30.173167] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:06:33.101 [2024-05-15 10:58:30.173238] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403621 ] 00:06:33.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.360 [2024-05-15 10:58:30.429284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.360 [2024-05-15 10:58:30.525223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.360 [2024-05-15 10:58:30.585363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.360 [2024-05-15 10:58:30.601315] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:33.360 [2024-05-15 10:58:30.601749] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:33.360 INFO: Running with entropic power schedule (0xFF, 100). 00:06:33.360 INFO: Seed: 2297881103 00:06:33.619 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:33.619 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:33.619 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:33.619 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.619 [2024-05-15 10:58:30.657033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.619 [2024-05-15 10:58:30.657066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.619 #2 INITED cov: 11786 ft: 11787 corp: 1/1b exec/s: 0 rss: 69Mb 00:06:33.619 [2024-05-15 10:58:30.696974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.619 [2024-05-15 10:58:30.697000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.619 #3 NEW cov: 11916 ft: 12407 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:33.619 [2024-05-15 10:58:30.747280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.619 [2024-05-15 10:58:30.747306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.619 [2024-05-15 10:58:30.747364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.619 [2024-05-15 10:58:30.747378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.619 #4 NEW cov: 11922 ft: 13304 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:33.619 [2024-05-15 10:58:30.787332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.619 [2024-05-15 10:58:30.787357] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.619 [2024-05-15 10:58:30.787435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.619 [2024-05-15 10:58:30.787450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.619 #5 NEW cov: 12007 ft: 13507 corp: 4/6b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:33.619 [2024-05-15 10:58:30.837501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.619 [2024-05-15 10:58:30.837525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.619 [2024-05-15 10:58:30.837583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.619 [2024-05-15 10:58:30.837596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.619 #6 NEW cov: 12007 ft: 13589 corp: 5/8b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:33.878 [2024-05-15 10:58:30.887684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:30.887711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.878 [2024-05-15 10:58:30.887770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:30.887784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.878 #7 NEW cov: 12007 ft: 13636 corp: 6/10b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeByte- 00:06:33.878 [2024-05-15 10:58:30.937790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:30.937816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.878 [2024-05-15 10:58:30.937886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:30.937901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.878 #8 NEW cov: 12007 ft: 13694 corp: 7/12b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:33.878 [2024-05-15 10:58:30.987859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:30.987884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:33.878 [2024-05-15 10:58:30.987957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:30.987971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.878 #9 NEW cov: 12007 ft: 13759 corp: 8/14b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:33.878 [2024-05-15 10:58:31.027846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:31.027871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.878 #10 NEW cov: 12007 ft: 13893 corp: 9/15b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 EraseBytes- 00:06:33.878 [2024-05-15 10:58:31.068582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:31.068608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.878 [2024-05-15 10:58:31.068664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:31.068678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.878 [2024-05-15 10:58:31.068737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:31.068751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.878 [2024-05-15 10:58:31.068804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:31.068818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.878 [2024-05-15 10:58:31.068871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:31.068884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.878 #11 NEW cov: 12007 ft: 14333 corp: 10/20b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:33.878 [2024-05-15 10:58:31.108103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.878 [2024-05-15 10:58:31.108129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.878 #12 NEW cov: 12007 ft: 14383 corp: 11/21b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:06:34.138 [2024-05-15 10:58:31.148217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.148243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.138 #13 NEW cov: 12007 ft: 14406 corp: 12/22b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:06:34.138 [2024-05-15 10:58:31.198995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.199020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.199077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.199091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.199147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.199160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.199216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.199229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.199282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.199295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.138 #14 NEW cov: 12007 ft: 14480 corp: 13/27b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:34.138 [2024-05-15 10:58:31.248500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.248527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.138 #15 NEW cov: 12007 ft: 14507 corp: 14/28b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 CopyPart- 00:06:34.138 [2024-05-15 10:58:31.288753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.288778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.288833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.288847] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.138 #16 NEW cov: 12007 ft: 14605 corp: 15/30b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ChangeBit- 00:06:34.138 [2024-05-15 10:58:31.329181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.329205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.329261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.329274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.329328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.329341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.329396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.329408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.138 #17 NEW cov: 12007 ft: 14638 corp: 16/34b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:34.138 [2024-05-15 10:58:31.379479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.379503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.379561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.379574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.379628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.379641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.379698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 [2024-05-15 10:58:31.379711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.138 [2024-05-15 10:58:31.379765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.138 
[2024-05-15 10:58:31.379780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.398 #18 NEW cov: 12007 ft: 14668 corp: 17/39b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBit- 00:06:34.398 [2024-05-15 10:58:31.429625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.429649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.398 [2024-05-15 10:58:31.429708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.429721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.398 [2024-05-15 10:58:31.429777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.429790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.398 [2024-05-15 10:58:31.429844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.429857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.398 [2024-05-15 10:58:31.429909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.429923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.398 #19 NEW cov: 12007 ft: 14678 corp: 18/44b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:34.398 [2024-05-15 10:58:31.479134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.479158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.398 #20 NEW cov: 12007 ft: 14699 corp: 19/45b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:34.398 [2024-05-15 10:58:31.529587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.529612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.398 [2024-05-15 10:58:31.529667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.529681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.398 [2024-05-15 10:58:31.529753] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.398 [2024-05-15 10:58:31.529767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.657 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:34.657 #21 NEW cov: 12030 ft: 14890 corp: 20/48b lim: 5 exec/s: 21 rss: 71Mb L: 3/5 MS: 1 InsertByte- 00:06:34.657 [2024-05-15 10:58:31.850923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.657 [2024-05-15 10:58:31.850975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.657 [2024-05-15 10:58:31.851046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.657 [2024-05-15 10:58:31.851068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.657 [2024-05-15 10:58:31.851135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.657 [2024-05-15 10:58:31.851156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.657 [2024-05-15 10:58:31.851227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.657 [2024-05-15 10:58:31.851248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.657 [2024-05-15 10:58:31.851317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.657 [2024-05-15 10:58:31.851337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.657 #22 NEW cov: 12030 ft: 14956 corp: 21/53b lim: 5 exec/s: 22 rss: 71Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:34.657 [2024-05-15 10:58:31.890146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.657 [2024-05-15 10:58:31.890171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.657 #23 NEW cov: 12030 ft: 14981 corp: 22/54b lim: 5 exec/s: 23 rss: 71Mb L: 1/5 MS: 1 ChangeBit- 00:06:34.916 [2024-05-15 10:58:31.940573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:31.940598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.916 #24 NEW cov: 12030 ft: 15019 corp: 23/55b lim: 5 exec/s: 24 rss: 71Mb L: 1/5 MS: 1 ChangeBit- 00:06:34.916 [2024-05-15 
10:58:31.990640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:31.990666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:31.990723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:31.990736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.916 #25 NEW cov: 12030 ft: 15053 corp: 24/57b lim: 5 exec/s: 25 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:34.916 [2024-05-15 10:58:32.040944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.040968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.041027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.041040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.041100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.041114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.916 #26 NEW cov: 12030 ft: 15087 corp: 25/60b lim: 5 exec/s: 26 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:34.916 [2024-05-15 10:58:32.091091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.091115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.091173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.091187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.091242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.091255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.916 #27 NEW cov: 12030 ft: 15094 corp: 26/63b lim: 5 exec/s: 27 rss: 72Mb L: 3/5 MS: 1 CrossOver- 00:06:34.916 [2024-05-15 10:58:32.131054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 
[2024-05-15 10:58:32.131078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.131133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.131146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.916 #28 NEW cov: 12030 ft: 15099 corp: 27/65b lim: 5 exec/s: 28 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:34.916 [2024-05-15 10:58:32.171654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.171678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.171736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.171749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.171803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.171816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.171868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.171881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.916 [2024-05-15 10:58:32.171935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.916 [2024-05-15 10:58:32.171948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.176 #29 NEW cov: 12030 ft: 15160 corp: 28/70b lim: 5 exec/s: 29 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:06:35.176 [2024-05-15 10:58:32.221121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.221147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.176 #30 NEW cov: 12030 ft: 15188 corp: 29/71b lim: 5 exec/s: 30 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:06:35.176 [2024-05-15 10:58:32.261366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.261396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.261468] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.261482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.176 #31 NEW cov: 12030 ft: 15202 corp: 30/73b lim: 5 exec/s: 31 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:06:35.176 [2024-05-15 10:58:32.302018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.302044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.302102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.302115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.302168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.302181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.302236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.302250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.302302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.302315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.176 #32 NEW cov: 12030 ft: 15211 corp: 31/78b lim: 5 exec/s: 32 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:35.176 [2024-05-15 10:58:32.342105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.342131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.342186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.342199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.342255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.342271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:35.176 [2024-05-15 10:58:32.342325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.342338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.342393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.342422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.176 #33 NEW cov: 12030 ft: 15221 corp: 32/83b lim: 5 exec/s: 33 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:35.176 [2024-05-15 10:58:32.381734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.381759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.176 [2024-05-15 10:58:32.381830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.381844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.176 #34 NEW cov: 12030 ft: 15243 corp: 33/85b lim: 5 exec/s: 34 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:35.176 [2024-05-15 10:58:32.421697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.176 [2024-05-15 10:58:32.421721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.436 #35 NEW cov: 12030 ft: 15257 corp: 34/86b lim: 5 exec/s: 35 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:35.436 [2024-05-15 10:58:32.472324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.472350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.472411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.472425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.472479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.472492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.472545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:35.436 [2024-05-15 10:58:32.472558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.436 #36 NEW cov: 12030 ft: 15283 corp: 35/90b lim: 5 exec/s: 36 rss: 72Mb L: 4/5 MS: 1 InsertByte- 00:06:35.436 [2024-05-15 10:58:32.522152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.522176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.522237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.522251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.436 #37 NEW cov: 12030 ft: 15292 corp: 36/92b lim: 5 exec/s: 37 rss: 72Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:35.436 [2024-05-15 10:58:32.572614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.572639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.572693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.572706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.572762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.572775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.572829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.572842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.436 #38 NEW cov: 12030 ft: 15317 corp: 37/96b lim: 5 exec/s: 38 rss: 72Mb L: 4/5 MS: 1 InsertByte- 00:06:35.436 [2024-05-15 10:58:32.612938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.612963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.613018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.613031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 
10:58:32.613089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.613102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.613157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.613170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.613225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.613239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.652989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.653013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.653088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.653102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.653156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.653169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.653224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.653238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.436 [2024-05-15 10:58:32.653292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.436 [2024-05-15 10:58:32.653304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.436 #40 NEW cov: 12030 ft: 15344 corp: 38/101b lim: 5 exec/s: 20 rss: 72Mb L: 5/5 MS: 2 CopyPart-ChangeBinInt- 00:06:35.436 #40 DONE cov: 12030 ft: 15344 corp: 38/101b lim: 5 exec/s: 20 rss: 72Mb 00:06:35.436 Done 40 runs in 2 second(s) 00:06:35.436 [2024-05-15 10:58:32.673677] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf 
/tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:35.696 10:58:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:35.696 [2024-05-15 10:58:32.842212] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:06:35.696 [2024-05-15 10:58:32.842286] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404150 ] 00:06:35.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.955 [2024-05-15 10:58:33.098275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.955 [2024-05-15 10:58:33.190321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.213 [2024-05-15 10:58:33.249892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.213 [2024-05-15 10:58:33.265844] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:36.213 [2024-05-15 10:58:33.266271] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:36.213 INFO: Running with entropic power schedule (0xFF, 100). 00:06:36.213 INFO: Seed: 667922145 00:06:36.213 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:36.213 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:36.213 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:36.213 INFO: A corpus is not provided, starting from an empty corpus 00:06:36.213 #2 INITED exec/s: 0 rss: 63Mb 00:06:36.213 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:36.213 This may also happen if the target rejected all inputs we tried so far 00:06:36.213 [2024-05-15 10:58:33.311557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.213 [2024-05-15 10:58:33.311585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.213 [2024-05-15 10:58:33.311661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.213 [2024-05-15 10:58:33.311675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.471 NEW_FUNC[1/685]: 0x48eb90 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:36.472 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:36.472 #3 NEW cov: 11809 ft: 11810 corp: 2/23b lim: 40 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:06:36.472 [2024-05-15 10:58:33.642432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.472 [2024-05-15 10:58:33.642466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.472 [2024-05-15 10:58:33.642541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.472 [2024-05-15 
10:58:33.642556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.472 #4 NEW cov: 11939 ft: 12502 corp: 3/45b lim: 40 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 ChangeByte- 00:06:36.472 [2024-05-15 10:58:33.692660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.472 [2024-05-15 10:58:33.692688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.472 [2024-05-15 10:58:33.692754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.472 [2024-05-15 10:58:33.692771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.472 [2024-05-15 10:58:33.692832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:9999993b cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.472 [2024-05-15 10:58:33.692846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.472 #5 NEW cov: 11945 ft: 12944 corp: 4/70b lim: 40 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 CrossOver- 00:06:36.731 [2024-05-15 10:58:33.742768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.742793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.742858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.742872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.742935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.742949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.731 #6 NEW cov: 12030 ft: 13217 corp: 5/95b lim: 40 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 CrossOver- 00:06:36.731 [2024-05-15 10:58:33.783012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.783038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.783103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999930 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.783117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.783181] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303099 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.783195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.783256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.783269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.731 #7 NEW cov: 12030 ft: 13701 corp: 6/132b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:36.731 [2024-05-15 10:58:33.832889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.832916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.832980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99996f66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.832994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.731 #8 NEW cov: 12030 ft: 13854 corp: 7/154b lim: 40 exec/s: 0 rss: 70Mb L: 22/37 MS: 1 ChangeBinInt- 00:06:36.731 [2024-05-15 10:58:33.873248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.873277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.873338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.873352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.873415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:9999993b cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.873429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.873488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.873502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.731 #9 NEW cov: 12030 ft: 13900 corp: 8/193b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:36.731 [2024-05-15 10:58:33.923274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.923299] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.923378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99996f66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.923397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.923472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:66666666 cdw11:66ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.923487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.731 #10 NEW cov: 12030 ft: 13913 corp: 9/222b lim: 40 exec/s: 0 rss: 71Mb L: 29/39 MS: 1 InsertRepeatedBytes- 00:06:36.731 [2024-05-15 10:58:33.973510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.973536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.973614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999930 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.973628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.973690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30323099 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.973703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.731 [2024-05-15 10:58:33.973763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.731 [2024-05-15 10:58:33.973777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.990 #11 NEW cov: 12030 ft: 13963 corp: 10/259b lim: 40 exec/s: 0 rss: 71Mb L: 37/39 MS: 1 ChangeBit- 00:06:36.990 [2024-05-15 10:58:34.023535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.990 [2024-05-15 10:58:34.023561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.990 [2024-05-15 10:58:34.023624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.990 [2024-05-15 10:58:34.023638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.990 [2024-05-15 10:58:34.023701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:36.990 [2024-05-15 10:58:34.023715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.990 #17 NEW cov: 12030 ft: 14010 corp: 11/284b lim: 40 exec/s: 0 rss: 71Mb L: 25/39 MS: 1 CrossOver- 00:06:36.990 [2024-05-15 10:58:34.063663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.990 [2024-05-15 10:58:34.063689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.991 [2024-05-15 10:58:34.063752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99996f66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.063766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.991 [2024-05-15 10:58:34.063828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:66666666 cdw11:66ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.063841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.991 #23 NEW cov: 12030 ft: 14040 corp: 12/313b lim: 40 exec/s: 0 rss: 71Mb L: 29/39 MS: 1 ShuffleBytes- 00:06:36.991 [2024-05-15 10:58:34.113669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.113694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.991 [2024-05-15 10:58:34.113755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99996f66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.113769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.991 #24 NEW cov: 12030 ft: 14052 corp: 13/335b lim: 40 exec/s: 0 rss: 71Mb L: 22/39 MS: 1 CMP- DE: "\377\014"- 00:06:36.991 [2024-05-15 10:58:34.153912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.153937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.991 [2024-05-15 10:58:34.154019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.154033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.991 [2024-05-15 10:58:34.154095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:9999993b cdw11:ffffff99 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.154108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.991 #25 
NEW cov: 12030 ft: 14068 corp: 14/361b lim: 40 exec/s: 0 rss: 71Mb L: 26/39 MS: 1 EraseBytes- 00:06:36.991 [2024-05-15 10:58:34.203937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:999999ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.203963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.991 [2024-05-15 10:58:34.204041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.204056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.991 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:36.991 #26 NEW cov: 12053 ft: 14182 corp: 15/383b lim: 40 exec/s: 0 rss: 71Mb L: 22/39 MS: 1 EraseBytes- 00:06:36.991 [2024-05-15 10:58:34.244160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.244186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.991 [2024-05-15 10:58:34.244266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99998999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.244280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.991 [2024-05-15 10:58:34.244343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.991 [2024-05-15 10:58:34.244357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.250 #27 NEW cov: 12053 ft: 14232 corp: 16/408b lim: 40 exec/s: 0 rss: 71Mb L: 25/39 MS: 1 ChangeBit- 00:06:37.250 [2024-05-15 10:58:34.284137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.284163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.250 [2024-05-15 10:58:34.284243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99996f66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.284257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.250 #28 NEW cov: 12053 ft: 14251 corp: 17/430b lim: 40 exec/s: 28 rss: 71Mb L: 22/39 MS: 1 PersAutoDict- DE: "\377\014"- 00:06:37.250 [2024-05-15 10:58:34.324411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.324437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.250 [2024-05-15 10:58:34.324498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.324513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.250 [2024-05-15 10:58:34.324572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:ff0c9999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.324585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.250 #29 NEW cov: 12053 ft: 14258 corp: 18/457b lim: 40 exec/s: 29 rss: 71Mb L: 27/39 MS: 1 PersAutoDict- DE: "\377\014"- 00:06:37.250 [2024-05-15 10:58:34.364248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a9b9b9b cdw11:9b9b9b9b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.364273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.250 #31 NEW cov: 12053 ft: 14611 corp: 19/471b lim: 40 exec/s: 31 rss: 71Mb L: 14/39 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:37.250 [2024-05-15 10:58:34.404614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9999ff0c cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.404638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.250 [2024-05-15 10:58:34.404723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:89999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.404737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.250 [2024-05-15 10:58:34.404799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.404812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.250 #32 NEW cov: 12053 ft: 14669 corp: 20/498b lim: 40 exec/s: 32 rss: 72Mb L: 27/39 MS: 1 PersAutoDict- DE: "\377\014"- 00:06:37.250 [2024-05-15 10:58:34.454461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0aff0c9b cdw11:9b9b9b9b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.454486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.250 #33 NEW cov: 12053 ft: 14730 corp: 21/512b lim: 40 exec/s: 33 rss: 72Mb L: 14/39 MS: 1 PersAutoDict- DE: "\377\014"- 00:06:37.250 [2024-05-15 10:58:34.504906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff99ff0c cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.504932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:37.250 [2024-05-15 10:58:34.505014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:89999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.505029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.250 [2024-05-15 10:58:34.505093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.250 [2024-05-15 10:58:34.505106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.510 #34 NEW cov: 12053 ft: 14759 corp: 22/539b lim: 40 exec/s: 34 rss: 72Mb L: 27/39 MS: 1 ChangeByte- 00:06:37.510 [2024-05-15 10:58:34.555029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.555054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.510 [2024-05-15 10:58:34.555122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99996f66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.555136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.510 [2024-05-15 10:58:34.555203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:66666666 cdw11:66ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.555216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.510 #35 NEW cov: 12053 ft: 14834 corp: 23/568b lim: 40 exec/s: 35 rss: 72Mb L: 29/39 MS: 1 ChangeByte- 00:06:37.510 [2024-05-15 10:58:34.594973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999899 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.594999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.510 [2024-05-15 10:58:34.595060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.595074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.510 #36 NEW cov: 12053 ft: 14843 corp: 24/590b lim: 40 exec/s: 36 rss: 72Mb L: 22/39 MS: 1 ChangeBit- 00:06:37.510 [2024-05-15 10:58:34.635078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.635103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.510 [2024-05-15 10:58:34.635166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99996f66 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.635180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.510 #42 NEW cov: 12053 ft: 14879 corp: 25/612b lim: 40 exec/s: 42 rss: 72Mb L: 22/39 MS: 1 CopyPart- 00:06:37.510 [2024-05-15 10:58:34.685535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.685560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.510 [2024-05-15 10:58:34.685640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999930 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.685654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.510 [2024-05-15 10:58:34.685717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.685730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.510 [2024-05-15 10:58:34.685790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.685803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.510 #43 NEW cov: 12053 ft: 14902 corp: 26/649b lim: 40 exec/s: 43 rss: 72Mb L: 37/39 MS: 1 ChangeBinInt- 00:06:37.510 [2024-05-15 10:58:34.725224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.725248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.510 #44 NEW cov: 12053 ft: 14948 corp: 27/664b lim: 40 exec/s: 44 rss: 72Mb L: 15/39 MS: 1 EraseBytes- 00:06:37.510 [2024-05-15 10:58:34.765357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0aff5c0c cdw11:9b9b9b9b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.510 [2024-05-15 10:58:34.765390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.769 #45 NEW cov: 12053 ft: 14997 corp: 28/679b lim: 40 exec/s: 45 rss: 72Mb L: 15/39 MS: 1 InsertByte- 00:06:37.770 [2024-05-15 10:58:34.815536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:fffff7f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.815561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.770 #48 NEW cov: 12053 ft: 15011 corp: 29/687b lim: 40 exec/s: 48 rss: 72Mb L: 8/39 MS: 3 PersAutoDict-ChangeBinInt-InsertRepeatedBytes- DE: "\377\014"- 00:06:37.770 [2024-05-15 10:58:34.855632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:4 nsid:0 cdw10:ffffffff cdw11:3afffff7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.855657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.770 #49 NEW cov: 12053 ft: 15028 corp: 30/696b lim: 40 exec/s: 49 rss: 72Mb L: 9/39 MS: 1 InsertByte- 00:06:37.770 [2024-05-15 10:58:34.906048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:999999a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.906073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.770 [2024-05-15 10:58:34.906138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99996f66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.906151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.770 [2024-05-15 10:58:34.906215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:66666666 cdw11:66ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.906228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.770 #50 NEW cov: 12053 ft: 15065 corp: 31/725b lim: 40 exec/s: 50 rss: 72Mb L: 29/39 MS: 1 ChangeBinInt- 00:06:37.770 [2024-05-15 10:58:34.946020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.946045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.770 [2024-05-15 10:58:34.946120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.946134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.770 #51 NEW cov: 12053 ft: 15086 corp: 32/745b lim: 40 exec/s: 51 rss: 73Mb L: 20/39 MS: 1 EraseBytes- 00:06:37.770 [2024-05-15 10:58:34.986099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.986125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.770 [2024-05-15 10:58:34.986204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:34.986218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.770 #52 NEW cov: 12053 ft: 15100 corp: 33/767b lim: 40 exec/s: 52 rss: 73Mb L: 22/39 MS: 1 ShuffleBytes- 00:06:37.770 [2024-05-15 10:58:35.026341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 
[2024-05-15 10:58:35.026367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.770 [2024-05-15 10:58:35.026433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999900 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:35.026447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.770 [2024-05-15 10:58:35.026508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.770 [2024-05-15 10:58:35.026522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.075 #53 NEW cov: 12053 ft: 15103 corp: 34/797b lim: 40 exec/s: 53 rss: 73Mb L: 30/39 MS: 1 InsertRepeatedBytes- 00:06:38.075 [2024-05-15 10:58:35.066454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff99ff0c cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.075 [2024-05-15 10:58:35.066479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.075 [2024-05-15 10:58:35.066544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999999 cdw11:89999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.075 [2024-05-15 10:58:35.066558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.075 [2024-05-15 10:58:35.066619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.075 [2024-05-15 10:58:35.066633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.075 #54 NEW cov: 12053 ft: 15108 corp: 35/824b lim: 40 exec/s: 54 rss: 73Mb L: 27/39 MS: 1 CopyPart- 00:06:38.075 [2024-05-15 10:58:35.116780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.116805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.116867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99999930 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.116880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.116942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30323099 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.116955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.117016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:99999999 
cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.117029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.076 #55 NEW cov: 12053 ft: 15129 corp: 36/856b lim: 40 exec/s: 55 rss: 73Mb L: 32/39 MS: 1 EraseBytes- 00:06:38.076 [2024-05-15 10:58:35.166647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:999999ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.166671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.166735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.166748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.076 #56 NEW cov: 12053 ft: 15134 corp: 37/878b lim: 40 exec/s: 56 rss: 73Mb L: 22/39 MS: 1 ChangeByte- 00:06:38.076 [2024-05-15 10:58:35.217021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999900 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.217046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.217123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000099 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.217137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.217196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.217209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.217267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:9999993b cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.217281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.076 #57 NEW cov: 12053 ft: 15145 corp: 38/911b lim: 40 exec/s: 57 rss: 73Mb L: 33/39 MS: 1 InsertRepeatedBytes- 00:06:38.076 [2024-05-15 10:58:35.257030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99999999 cdw11:99d99999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.257054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.257136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:99998999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.257150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:38.076 [2024-05-15 10:58:35.257214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99999999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.257227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.076 #58 NEW cov: 12053 ft: 15151 corp: 39/936b lim: 40 exec/s: 58 rss: 73Mb L: 25/39 MS: 1 ChangeBit- 00:06:38.076 [2024-05-15 10:58:35.296880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:99998999 cdw11:99999999 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.076 [2024-05-15 10:58:35.296905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.370 #59 NEW cov: 12053 ft: 15167 corp: 40/951b lim: 40 exec/s: 29 rss: 73Mb L: 15/39 MS: 1 ChangeBit- 00:06:38.370 #59 DONE cov: 12053 ft: 15167 corp: 40/951b lim: 40 exec/s: 29 rss: 73Mb 00:06:38.370 ###### Recommended dictionary. ###### 00:06:38.370 "\377\014" # Uses: 5 00:06:38.370 ###### End of recommended dictionary. ###### 00:06:38.370 Done 59 runs in 2 second(s) 00:06:38.370 [2024-05-15 10:58:35.327832] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:38.370 10:58:35 llvm_fuzz.nvmf_fuzz 
-- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:38.370 [2024-05-15 10:58:35.500019] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:38.370 [2024-05-15 10:58:35.500090] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404502 ] 00:06:38.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.629 [2024-05-15 10:58:35.751741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.629 [2024-05-15 10:58:35.837941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.888 [2024-05-15 10:58:35.897313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.888 [2024-05-15 10:58:35.913257] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:38.888 [2024-05-15 10:58:35.913705] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:38.888 INFO: Running with entropic power schedule (0xFF, 100). 00:06:38.888 INFO: Seed: 3313936092 00:06:38.888 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:38.888 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:38.888 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:38.888 INFO: A corpus is not provided, starting from an empty corpus 00:06:38.888 #2 INITED exec/s: 0 rss: 64Mb 00:06:38.888 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:38.888 This may also happen if the target rejected all inputs we tried so far 00:06:38.888 [2024-05-15 10:58:35.962958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08ffffff cdw11:ffff08ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.888 [2024-05-15 10:58:35.962986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.146 NEW_FUNC[1/686]: 0x490900 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:39.146 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:39.146 #22 NEW cov: 11821 ft: 11809 corp: 2/13b lim: 40 exec/s: 0 rss: 70Mb L: 12/12 MS: 5 CrossOver-EraseBytes-ChangeBit-InsertRepeatedBytes-CopyPart- 00:06:39.146 [2024-05-15 10:58:36.273624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.146 [2024-05-15 10:58:36.273664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.146 #26 NEW cov: 11951 ft: 12438 corp: 3/22b lim: 40 exec/s: 0 rss: 70Mb L: 9/12 MS: 4 ChangeBit-ChangeBinInt-ChangeByte-CMP- DE: "ow\012y?\373\205\000"- 00:06:39.146 [2024-05-15 10:58:36.313600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.146 [2024-05-15 10:58:36.313626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.146 #27 NEW cov: 11957 ft: 12562 corp: 4/31b lim: 40 exec/s: 0 rss: 70Mb L: 9/12 MS: 1 PersAutoDict- DE: "ow\012y?\373\205\000"- 00:06:39.146 [2024-05-15 10:58:36.343676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.146 [2024-05-15 10:58:36.343701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.146 #28 NEW cov: 12042 ft: 12804 corp: 5/40b lim: 40 exec/s: 0 rss: 70Mb L: 9/12 MS: 1 CrossOver- 00:06:39.146 [2024-05-15 10:58:36.393843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08ffffff cdw11:ffff08ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.146 [2024-05-15 10:58:36.393868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.405 #29 NEW cov: 12042 ft: 12968 corp: 6/52b lim: 40 exec/s: 0 rss: 70Mb L: 12/12 MS: 1 ChangeBit- 00:06:39.405 [2024-05-15 10:58:36.443957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.405 [2024-05-15 10:58:36.443982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.405 #31 NEW cov: 12042 ft: 13044 corp: 7/61b lim: 40 exec/s: 0 rss: 71Mb L: 9/12 MS: 2 ChangeByte-PersAutoDict- DE: "ow\012y?\373\205\000"- 00:06:39.405 [2024-05-15 10:58:36.484072] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08ffffff cdw11:ffff0804 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.405 [2024-05-15 10:58:36.484097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.405 #42 NEW cov: 12042 ft: 13107 corp: 8/73b lim: 40 exec/s: 0 rss: 71Mb L: 12/12 MS: 1 ChangeBinInt- 00:06:39.405 [2024-05-15 10:58:36.534255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ff38500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.405 [2024-05-15 10:58:36.534280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.405 #43 NEW cov: 12042 ft: 13169 corp: 9/82b lim: 40 exec/s: 0 rss: 71Mb L: 9/12 MS: 1 ChangeBit- 00:06:39.405 [2024-05-15 10:58:36.574622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:77000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.405 [2024-05-15 10:58:36.574647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.405 [2024-05-15 10:58:36.574726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.405 [2024-05-15 10:58:36.574740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.405 [2024-05-15 10:58:36.574799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.405 [2024-05-15 10:58:36.574812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.405 #48 NEW cov: 12042 ft: 13988 corp: 10/110b lim: 40 exec/s: 0 rss: 71Mb L: 28/28 MS: 5 EraseBytes-ChangeBit-EraseBytes-EraseBytes-InsertRepeatedBytes- 00:06:39.405 [2024-05-15 10:58:36.624450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08ffffff cdw11:ff0408ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.405 [2024-05-15 10:58:36.624475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.405 #49 NEW cov: 12042 ft: 14036 corp: 11/122b lim: 40 exec/s: 0 rss: 71Mb L: 12/28 MS: 1 ShuffleBytes- 00:06:39.665 [2024-05-15 10:58:36.674558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff08ffff cdw11:ffffff08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.665 [2024-05-15 10:58:36.674583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.665 #50 NEW cov: 12042 ft: 14076 corp: 12/135b lim: 40 exec/s: 0 rss: 71Mb L: 13/28 MS: 1 CopyPart- 00:06:39.665 [2024-05-15 10:58:36.714652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a3f cdw11:fb857900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.665 [2024-05-15 10:58:36.714676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.665 #51 NEW cov: 12042 ft: 
14089 corp: 13/144b lim: 40 exec/s: 0 rss: 71Mb L: 9/28 MS: 1 ShuffleBytes- 00:06:39.665 [2024-05-15 10:58:36.764841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:ffffff07 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.665 [2024-05-15 10:58:36.764866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.665 #52 NEW cov: 12042 ft: 14120 corp: 14/153b lim: 40 exec/s: 0 rss: 71Mb L: 9/28 MS: 1 ChangeBinInt- 00:06:39.665 [2024-05-15 10:58:36.804934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f77380a cdw11:793ffb85 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.665 [2024-05-15 10:58:36.804959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.665 #53 NEW cov: 12042 ft: 14160 corp: 15/163b lim: 40 exec/s: 0 rss: 71Mb L: 10/28 MS: 1 InsertByte- 00:06:39.665 [2024-05-15 10:58:36.845027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff770a79 cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.665 [2024-05-15 10:58:36.845053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.665 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:39.665 #54 NEW cov: 12065 ft: 14206 corp: 16/172b lim: 40 exec/s: 0 rss: 71Mb L: 9/28 MS: 1 ChangeByte- 00:06:39.665 [2024-05-15 10:58:36.895182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:ffe3ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.665 [2024-05-15 10:58:36.895208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.665 #55 NEW cov: 12065 ft: 14250 corp: 17/182b lim: 40 exec/s: 0 rss: 71Mb L: 10/28 MS: 1 InsertByte- 00:06:39.924 [2024-05-15 10:58:36.935460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ffb770a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.924 [2024-05-15 10:58:36.935486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.924 [2024-05-15 10:58:36.935542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:793ffb85 cdw11:00bb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.924 [2024-05-15 10:58:36.935555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.924 #56 NEW cov: 12065 ft: 14473 corp: 18/199b lim: 40 exec/s: 56 rss: 71Mb L: 17/28 MS: 1 CrossOver- 00:06:39.924 [2024-05-15 10:58:36.975736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:40b76060 cdw11:60606060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.924 [2024-05-15 10:58:36.975762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.924 [2024-05-15 10:58:36.975820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:60606060 cdw11:60606060 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:39.924 [2024-05-15 10:58:36.975833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.924 [2024-05-15 10:58:36.975890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:60606060 cdw11:60606060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.924 [2024-05-15 10:58:36.975904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.924 #60 NEW cov: 12065 ft: 14495 corp: 19/226b lim: 40 exec/s: 60 rss: 71Mb L: 27/28 MS: 4 InsertByte-InsertByte-EraseBytes-InsertRepeatedBytes- 00:06:39.924 [2024-05-15 10:58:37.015652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f77380a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.924 [2024-05-15 10:58:37.015677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.924 [2024-05-15 10:58:37.015750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:793ffb85 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.924 [2024-05-15 10:58:37.015763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.924 #61 NEW cov: 12065 ft: 14503 corp: 20/244b lim: 40 exec/s: 61 rss: 72Mb L: 18/28 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:39.925 [2024-05-15 10:58:37.065682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a2c cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.925 [2024-05-15 10:58:37.065708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.925 #62 NEW cov: 12065 ft: 14519 corp: 21/253b lim: 40 exec/s: 62 rss: 72Mb L: 9/28 MS: 1 ChangeByte- 00:06:39.925 [2024-05-15 10:58:37.105756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6ff7380a cdw11:793ffb85 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.925 [2024-05-15 10:58:37.105781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.925 #63 NEW cov: 12065 ft: 14523 corp: 22/263b lim: 40 exec/s: 63 rss: 72Mb L: 10/28 MS: 1 ChangeBit- 00:06:39.925 [2024-05-15 10:58:37.145866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a2c cdw11:3f000900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.925 [2024-05-15 10:58:37.145891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.925 #64 NEW cov: 12065 ft: 14548 corp: 23/272b lim: 40 exec/s: 64 rss: 72Mb L: 9/28 MS: 1 ChangeBinInt- 00:06:39.925 [2024-05-15 10:58:37.185998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a2c cdw11:3f24fb85 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.925 [2024-05-15 10:58:37.186024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.184 #65 NEW cov: 12065 ft: 14552 corp: 24/282b lim: 40 exec/s: 65 rss: 72Mb L: 10/28 MS: 1 InsertByte- 00:06:40.184 [2024-05-15 
10:58:37.226096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f793f6f cdw11:770a793f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.184 [2024-05-15 10:58:37.226121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.184 #67 NEW cov: 12065 ft: 14554 corp: 25/297b lim: 40 exec/s: 67 rss: 72Mb L: 15/28 MS: 2 EraseBytes-PersAutoDict- DE: "ow\012y?\373\205\000"- 00:06:40.185 [2024-05-15 10:58:37.256207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f570a79 cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.256232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.185 #68 NEW cov: 12065 ft: 14558 corp: 26/306b lim: 40 exec/s: 68 rss: 72Mb L: 9/28 MS: 1 ChangeBit- 00:06:40.185 [2024-05-15 10:58:37.296472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08ffff6f cdw11:770a793f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.296497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.185 [2024-05-15 10:58:37.296555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fb8500ff cdw11:ff0408ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.296580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.185 #69 NEW cov: 12065 ft: 14615 corp: 27/326b lim: 40 exec/s: 69 rss: 72Mb L: 20/28 MS: 1 PersAutoDict- DE: "ow\012y?\373\205\000"- 00:06:40.185 [2024-05-15 10:58:37.346761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:77000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.346786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.185 [2024-05-15 10:58:37.346847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.346861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.185 [2024-05-15 10:58:37.346920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.346934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.185 #70 NEW cov: 12065 ft: 14654 corp: 28/353b lim: 40 exec/s: 70 rss: 72Mb L: 27/28 MS: 1 EraseBytes- 00:06:40.185 [2024-05-15 10:58:37.397062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.397087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.185 [2024-05-15 10:58:37.397144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.397157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.185 [2024-05-15 10:58:37.397218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.397231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.185 [2024-05-15 10:58:37.397290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.397303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.185 #71 NEW cov: 12065 ft: 14948 corp: 29/392b lim: 40 exec/s: 71 rss: 72Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:40.185 [2024-05-15 10:58:37.436857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f6f7777 cdw11:0a0a79ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.436882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.185 [2024-05-15 10:58:37.436958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3ff793f cdw11:fb8500ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.185 [2024-05-15 10:58:37.436972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.444 #72 NEW cov: 12065 ft: 15029 corp: 30/409b lim: 40 exec/s: 72 rss: 72Mb L: 17/39 MS: 1 CrossOver- 00:06:40.444 [2024-05-15 10:58:37.477154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.444 [2024-05-15 10:58:37.477178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.444 [2024-05-15 10:58:37.477254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.444 [2024-05-15 10:58:37.477267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.444 [2024-05-15 10:58:37.477326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.444 [2024-05-15 10:58:37.477339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.444 #73 NEW cov: 12065 ft: 15043 corp: 31/439b lim: 40 exec/s: 73 rss: 72Mb L: 30/39 MS: 1 InsertRepeatedBytes- 00:06:40.444 [2024-05-15 10:58:37.516932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6ff7380a cdw11:793ffb85 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.444 [2024-05-15 10:58:37.516957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.444 #74 NEW cov: 12065 ft: 15050 corp: 32/449b lim: 40 exec/s: 74 rss: 72Mb L: 10/39 MS: 1 ChangeByte- 00:06:40.444 [2024-05-15 10:58:37.557037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08ffff2c cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.444 [2024-05-15 10:58:37.557061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.444 #75 NEW cov: 12065 ft: 15064 corp: 33/461b lim: 40 exec/s: 75 rss: 72Mb L: 12/39 MS: 1 CrossOver- 00:06:40.444 [2024-05-15 10:58:37.597297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08ffff04 cdw11:08ff793f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.444 [2024-05-15 10:58:37.597323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.445 [2024-05-15 10:58:37.597397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fb8500ff cdw11:ff0408ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.445 [2024-05-15 10:58:37.597414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.445 #76 NEW cov: 12065 ft: 15065 corp: 34/481b lim: 40 exec/s: 76 rss: 72Mb L: 20/39 MS: 1 CopyPart- 00:06:40.445 [2024-05-15 10:58:37.647577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:40b76060 cdw11:60606060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.445 [2024-05-15 10:58:37.647601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.445 [2024-05-15 10:58:37.647660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:60606060 cdw11:60606060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.445 [2024-05-15 10:58:37.647674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.445 [2024-05-15 10:58:37.647732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:60606060 cdw11:60606060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.445 [2024-05-15 10:58:37.647745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.445 #77 NEW cov: 12065 ft: 15107 corp: 35/505b lim: 40 exec/s: 77 rss: 72Mb L: 24/39 MS: 1 EraseBytes- 00:06:40.445 [2024-05-15 10:58:37.697427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.445 [2024-05-15 10:58:37.697451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.704 #82 NEW cov: 12065 ft: 15120 corp: 36/514b lim: 40 exec/s: 82 rss: 72Mb L: 9/39 MS: 5 ChangeByte-CopyPart-CrossOver-EraseBytes-PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:40.704 [2024-05-15 10:58:37.737556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff08ffff cdw11:ffffff08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.704 [2024-05-15 10:58:37.737580] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.704 #83 NEW cov: 12065 ft: 15138 corp: 37/527b lim: 40 exec/s: 83 rss: 72Mb L: 13/39 MS: 1 ShuffleBytes- 00:06:40.704 [2024-05-15 10:58:37.787675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f77380a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.704 [2024-05-15 10:58:37.787700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.704 #84 NEW cov: 12065 ft: 15144 corp: 38/537b lim: 40 exec/s: 84 rss: 73Mb L: 10/39 MS: 1 EraseBytes- 00:06:40.704 [2024-05-15 10:58:37.837831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ff30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.704 [2024-05-15 10:58:37.837857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.704 #85 NEW cov: 12065 ft: 15151 corp: 39/546b lim: 40 exec/s: 85 rss: 73Mb L: 9/39 MS: 1 CrossOver- 00:06:40.704 [2024-05-15 10:58:37.888429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a79 cdw11:3ffb8500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.704 [2024-05-15 10:58:37.888455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.704 [2024-05-15 10:58:37.888507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.704 [2024-05-15 10:58:37.888521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.704 [2024-05-15 10:58:37.888573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.704 [2024-05-15 10:58:37.888589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.704 [2024-05-15 10:58:37.888640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff7f cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.704 [2024-05-15 10:58:37.888653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.704 #86 NEW cov: 12065 ft: 15158 corp: 40/585b lim: 40 exec/s: 86 rss: 73Mb L: 39/39 MS: 1 ChangeBit- 00:06:40.704 [2024-05-15 10:58:37.938112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6f770a00 cdw11:ffe3ff79 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.704 [2024-05-15 10:58:37.938137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.704 #87 NEW cov: 12065 ft: 15174 corp: 41/595b lim: 40 exec/s: 43 rss: 73Mb L: 10/39 MS: 1 ShuffleBytes- 00:06:40.704 #87 DONE cov: 12065 ft: 15174 corp: 41/595b lim: 40 exec/s: 43 rss: 73Mb 00:06:40.704 ###### Recommended dictionary. 
###### 00:06:40.704 "ow\012y?\373\205\000" # Uses: 4 00:06:40.704 "\000\000\000\000\000\000\000\000" # Uses: 1 00:06:40.704 ###### End of recommended dictionary. ###### 00:06:40.704 Done 87 runs in 2 second(s) 00:06:40.704 [2024-05-15 10:58:37.967241] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.964 10:58:38 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:40.964 [2024-05-15 10:58:38.139436] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:06:40.964 [2024-05-15 10:58:38.139537] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404985 ] 00:06:40.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.224 [2024-05-15 10:58:38.397060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.224 [2024-05-15 10:58:38.487084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.483 [2024-05-15 10:58:38.546784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.483 [2024-05-15 10:58:38.562740] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:41.483 [2024-05-15 10:58:38.563179] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:41.483 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.483 INFO: Seed: 1668956007 00:06:41.483 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:41.483 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:41.483 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:41.483 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.483 #2 INITED exec/s: 0 rss: 63Mb 00:06:41.483 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.483 This may also happen if the target rejected all inputs we tried so far 00:06:41.483 [2024-05-15 10:58:38.630060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1414ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.483 [2024-05-15 10:58:38.630097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.743 NEW_FUNC[1/685]: 0x492670 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:41.743 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.743 #6 NEW cov: 11793 ft: 11820 corp: 2/14b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 4 CopyPart-ChangeBinInt-CopyPart-InsertRepeatedBytes- 00:06:41.743 [2024-05-15 10:58:38.970188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a909090 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.743 [2024-05-15 10:58:38.970235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.743 NEW_FUNC[1/1]: 0x4c9810 in malloc_completion_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/bdev/malloc/bdev_malloc.c:870 00:06:41.743 #12 NEW cov: 11949 ft: 12481 corp: 3/23b lim: 40 exec/s: 0 rss: 70Mb L: 9/13 MS: 1 InsertRepeatedBytes- 00:06:42.002 [2024-05-15 10:58:39.010219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.002 [2024-05-15 10:58:39.010249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.002 #13 NEW cov: 11955 ft: 12685 corp: 4/33b lim: 40 exec/s: 0 rss: 70Mb L: 10/13 MS: 1 InsertByte- 00:06:42.002 [2024-05-15 10:58:39.060345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:8c909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.002 [2024-05-15 10:58:39.060374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.002 #14 NEW cov: 12040 ft: 13066 corp: 5/43b lim: 40 exec/s: 0 rss: 70Mb L: 10/13 MS: 1 ChangeByte- 00:06:42.002 [2024-05-15 10:58:39.110013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:5d909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.002 [2024-05-15 10:58:39.110042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.002 #20 NEW cov: 12040 ft: 13256 corp: 6/54b lim: 40 exec/s: 0 rss: 70Mb L: 11/13 MS: 1 InsertByte- 00:06:42.002 [2024-05-15 10:58:39.150456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.002 [2024-05-15 10:58:39.150484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.002 #21 NEW cov: 12040 ft: 13349 corp: 7/63b lim: 40 exec/s: 0 rss: 70Mb L: 9/13 MS: 1 EraseBytes- 00:06:42.002 [2024-05-15 10:58:39.190712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d1fbfbfb cdw11:fbfbfbfb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.002 [2024-05-15 10:58:39.190739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.002 #25 NEW cov: 12040 ft: 13432 corp: 8/76b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 4 CrossOver-ChangeBinInt-InsertByte-InsertRepeatedBytes- 00:06:42.002 [2024-05-15 10:58:39.230741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a939025 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.002 [2024-05-15 10:58:39.230771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.002 #26 NEW cov: 12040 ft: 13489 corp: 9/86b lim: 40 exec/s: 0 rss: 71Mb L: 10/13 MS: 1 InsertByte- 00:06:42.262 [2024-05-15 10:58:39.280727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a909000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.262 [2024-05-15 10:58:39.280754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.262 [2024-05-15 10:58:39.280881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.262 [2024-05-15 10:58:39.280897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.262 #33 NEW cov: 12040 ft: 14278 corp: 10/106b lim: 40 exec/s: 0 rss: 71Mb L: 20/20 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:42.262 [2024-05-15 10:58:39.321002] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:5d902590 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.262 [2024-05-15 10:58:39.321028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.262 #34 NEW cov: 12040 ft: 14308 corp: 11/120b lim: 40 exec/s: 0 rss: 71Mb L: 14/20 MS: 1 CrossOver- 00:06:42.262 [2024-05-15 10:58:39.371145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:9090d1fb cdw11:fbfbfbfb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.262 [2024-05-15 10:58:39.371173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.262 #35 NEW cov: 12040 ft: 14367 corp: 12/135b lim: 40 exec/s: 0 rss: 71Mb L: 15/20 MS: 1 CrossOver- 00:06:42.262 [2024-05-15 10:58:39.421323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:9090907a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.262 [2024-05-15 10:58:39.421349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.262 #41 NEW cov: 12040 ft: 14430 corp: 13/144b lim: 40 exec/s: 0 rss: 71Mb L: 9/20 MS: 1 ChangeByte- 00:06:42.262 [2024-05-15 10:58:39.461389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:8c909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.262 [2024-05-15 10:58:39.461414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.262 #42 NEW cov: 12040 ft: 14504 corp: 14/152b lim: 40 exec/s: 0 rss: 71Mb L: 8/20 MS: 1 EraseBytes- 00:06:42.262 [2024-05-15 10:58:39.511572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0e902590 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.262 [2024-05-15 10:58:39.511601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.521 #43 NEW cov: 12040 ft: 14543 corp: 15/162b lim: 40 exec/s: 0 rss: 71Mb L: 10/20 MS: 1 ChangeBit- 00:06:42.521 [2024-05-15 10:58:39.551606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:5d902590 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.521 [2024-05-15 10:58:39.551634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.521 #44 NEW cov: 12040 ft: 14560 corp: 16/176b lim: 40 exec/s: 0 rss: 71Mb L: 14/20 MS: 1 ChangeBit- 00:06:42.521 [2024-05-15 10:58:39.602595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.521 [2024-05-15 10:58:39.602621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.521 [2024-05-15 10:58:39.602744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.521 [2024-05-15 10:58:39.602761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.521 [2024-05-15 10:58:39.602881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a90 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.521 [2024-05-15 10:58:39.602898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.521 [2024-05-15 10:58:39.603015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:25905d90 cdw11:25909010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.522 [2024-05-15 10:58:39.603031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.522 #45 NEW cov: 12040 ft: 14891 corp: 17/212b lim: 40 exec/s: 45 rss: 71Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:42.522 [2024-05-15 10:58:39.651882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:5d909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.522 [2024-05-15 10:58:39.651909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.522 #46 NEW cov: 12040 ft: 14918 corp: 18/224b lim: 40 exec/s: 46 rss: 71Mb L: 12/36 MS: 1 InsertByte- 00:06:42.522 [2024-05-15 10:58:39.692066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:5d902590 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.522 [2024-05-15 10:58:39.692093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.522 #47 NEW cov: 12040 ft: 14950 corp: 19/238b lim: 40 exec/s: 47 rss: 71Mb L: 14/36 MS: 1 CrossOver- 00:06:42.522 [2024-05-15 10:58:39.732212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a7902590 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.522 [2024-05-15 10:58:39.732239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.522 #48 NEW cov: 12040 ft: 14960 corp: 20/247b lim: 40 exec/s: 48 rss: 71Mb L: 9/36 MS: 1 ChangeByte- 00:06:42.522 [2024-05-15 10:58:39.772480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a909090 cdw11:cfcfcfcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.522 [2024-05-15 10:58:39.772505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.522 [2024-05-15 10:58:39.772641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.522 [2024-05-15 10:58:39.772661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.522 [2024-05-15 10:58:39.772792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cf909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.522 [2024-05-15 10:58:39.772808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.781 #49 NEW cov: 12040 ft: 15170 corp: 21/273b lim: 40 exec/s: 49 rss: 71Mb L: 
26/36 MS: 1 InsertRepeatedBytes- 00:06:42.781 [2024-05-15 10:58:39.813193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0e902590 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:39.813219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.781 [2024-05-15 10:58:39.813349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:90900000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:39.813365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.781 [2024-05-15 10:58:39.813488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:39.813506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.781 [2024-05-15 10:58:39.813624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:39.813640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.781 #50 NEW cov: 12040 ft: 15195 corp: 22/311b lim: 40 exec/s: 50 rss: 71Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:42.781 [2024-05-15 10:58:39.862509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:9090d1fb cdw11:fb03fbfb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:39.862536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.781 #51 NEW cov: 12040 ft: 15204 corp: 23/326b lim: 40 exec/s: 51 rss: 71Mb L: 15/38 MS: 1 ChangeByte- 00:06:42.781 [2024-05-15 10:58:39.912725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a939025 cdw11:9090908f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:39.912751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.781 #52 NEW cov: 12040 ft: 15240 corp: 24/340b lim: 40 exec/s: 52 rss: 72Mb L: 14/38 MS: 1 InsertRepeatedBytes- 00:06:42.781 [2024-05-15 10:58:39.973239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0e902590 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:39.973267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.781 [2024-05-15 10:58:39.973393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a902590 cdw11:5d909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:39.973409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.781 #58 NEW cov: 12040 ft: 15252 corp: 25/356b lim: 40 exec/s: 58 rss: 72Mb L: 16/38 MS: 1 CrossOver- 00:06:42.781 [2024-05-15 10:58:40.023120] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.781 [2024-05-15 10:58:40.023152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.781 #59 NEW cov: 12040 ft: 15331 corp: 26/365b lim: 40 exec/s: 59 rss: 72Mb L: 9/38 MS: 1 CopyPart- 00:06:43.040 [2024-05-15 10:58:40.074101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.040 [2024-05-15 10:58:40.074130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.040 [2024-05-15 10:58:40.074253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.040 [2024-05-15 10:58:40.074269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.040 [2024-05-15 10:58:40.074403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a90 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.040 [2024-05-15 10:58:40.074422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.040 [2024-05-15 10:58:40.074553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:25905d90 cdw11:25909010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.040 [2024-05-15 10:58:40.074571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.040 #60 NEW cov: 12040 ft: 15346 corp: 27/401b lim: 40 exec/s: 60 rss: 72Mb L: 36/38 MS: 1 ShuffleBytes- 00:06:43.040 [2024-05-15 10:58:40.123101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a90db75 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.040 [2024-05-15 10:58:40.123129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.040 #61 NEW cov: 12040 ft: 15439 corp: 28/411b lim: 40 exec/s: 61 rss: 72Mb L: 10/38 MS: 1 ChangeBinInt- 00:06:43.040 [2024-05-15 10:58:40.163544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:90250a90 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.040 [2024-05-15 10:58:40.163572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.040 #62 NEW cov: 12040 ft: 15447 corp: 29/420b lim: 40 exec/s: 62 rss: 72Mb L: 9/38 MS: 1 ShuffleBytes- 00:06:43.041 [2024-05-15 10:58:40.203472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a7902590 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.041 [2024-05-15 10:58:40.203500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.041 #63 NEW cov: 12040 ft: 15465 corp: 30/429b lim: 40 exec/s: 63 rss: 72Mb L: 9/38 MS: 1 ChangeByte- 00:06:43.041 [2024-05-15 10:58:40.253801] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a909090 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.041 [2024-05-15 10:58:40.253828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.041 #64 NEW cov: 12040 ft: 15480 corp: 31/438b lim: 40 exec/s: 64 rss: 72Mb L: 9/38 MS: 1 ShuffleBytes- 00:06:43.041 [2024-05-15 10:58:40.294171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a909000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.041 [2024-05-15 10:58:40.294198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.041 [2024-05-15 10:58:40.294320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.041 [2024-05-15 10:58:40.294337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.300 #65 NEW cov: 12040 ft: 15509 corp: 32/458b lim: 40 exec/s: 65 rss: 72Mb L: 20/38 MS: 1 ChangeBit- 00:06:43.300 [2024-05-15 10:58:40.344000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a939025 cdw11:9090b08f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.300 [2024-05-15 10:58:40.344028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.300 #66 NEW cov: 12040 ft: 15541 corp: 33/472b lim: 40 exec/s: 66 rss: 72Mb L: 14/38 MS: 1 ChangeBit- 00:06:43.300 [2024-05-15 10:58:40.394109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a902590 cdw11:9090907a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.300 [2024-05-15 10:58:40.394136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.300 #67 NEW cov: 12040 ft: 15552 corp: 34/481b lim: 40 exec/s: 67 rss: 72Mb L: 9/38 MS: 1 CopyPart- 00:06:43.300 [2024-05-15 10:58:40.444252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a7902590 cdw11:2a909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.300 [2024-05-15 10:58:40.444280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.300 #68 NEW cov: 12040 ft: 15559 corp: 35/491b lim: 40 exec/s: 68 rss: 72Mb L: 10/38 MS: 1 InsertByte- 00:06:43.300 [2024-05-15 10:58:40.484963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a909090 cdw11:cfcfcfcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.300 [2024-05-15 10:58:40.484990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.300 [2024-05-15 10:58:40.485117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcfcfef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.300 [2024-05-15 10:58:40.485136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.300 [2024-05-15 10:58:40.485270] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:cfcfcfcf cdw11:cf909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.300 [2024-05-15 10:58:40.485288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.300 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:43.300 #69 NEW cov: 12063 ft: 15652 corp: 36/517b lim: 40 exec/s: 69 rss: 73Mb L: 26/38 MS: 1 ChangeBit- 00:06:43.300 [2024-05-15 10:58:40.534557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1a902590 cdw11:9090907a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.300 [2024-05-15 10:58:40.534583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.300 #70 NEW cov: 12063 ft: 15667 corp: 37/526b lim: 40 exec/s: 70 rss: 73Mb L: 9/38 MS: 1 ChangeBit- 00:06:43.560 [2024-05-15 10:58:40.585240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a90cfcf cdw11:cfcfcfcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.560 [2024-05-15 10:58:40.585267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.560 [2024-05-15 10:58:40.585395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfefcfcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.560 [2024-05-15 10:58:40.585412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.560 [2024-05-15 10:58:40.585523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:cfcfcf90 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.560 [2024-05-15 10:58:40.585542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.560 #71 NEW cov: 12063 ft: 15689 corp: 38/550b lim: 40 exec/s: 35 rss: 73Mb L: 24/38 MS: 1 EraseBytes- 00:06:43.560 #71 DONE cov: 12063 ft: 15689 corp: 38/550b lim: 40 exec/s: 35 rss: 73Mb 00:06:43.560 Done 71 runs in 2 second(s) 00:06:43.560 [2024-05-15 10:58:40.614997] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:43.560 10:58:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:43.560 [2024-05-15 10:58:40.784967] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:43.560 [2024-05-15 10:58:40.785037] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405512 ] 00:06:43.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.819 [2024-05-15 10:58:41.035176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.078 [2024-05-15 10:58:41.128200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.078 [2024-05-15 10:58:41.187948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.078 [2024-05-15 10:58:41.203905] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:44.078 [2024-05-15 10:58:41.204339] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:44.078 INFO: Running with entropic power schedule (0xFF, 100). 00:06:44.078 INFO: Seed: 15977202 00:06:44.078 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:44.078 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:44.078 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:44.078 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.078 #2 INITED exec/s: 0 rss: 64Mb 00:06:44.078 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:44.078 This may also happen if the target rejected all inputs we tried so far 00:06:44.078 [2024-05-15 10:58:41.272162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.078 [2024-05-15 10:58:41.272195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.078 [2024-05-15 10:58:41.272269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.078 [2024-05-15 10:58:41.272284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.078 [2024-05-15 10:58:41.272357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.078 [2024-05-15 10:58:41.272372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.078 [2024-05-15 10:58:41.272456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.078 [2024-05-15 10:58:41.272471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.078 [2024-05-15 10:58:41.272545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.078 [2024-05-15 10:58:41.272560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.337 NEW_FUNC[1/683]: 0x494230 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:44.337 NEW_FUNC[2/683]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.337 #19 NEW cov: 11796 ft: 11801 corp: 2/41b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:44.337 [2024-05-15 10:58:41.601351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.337 [2024-05-15 10:58:41.601394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.337 [2024-05-15 10:58:41.601508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.337 [2024-05-15 10:58:41.601525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.597 NEW_FUNC[1/2]: 0x1d35540 in thread_update_stats /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:924 00:06:44.597 NEW_FUNC[2/2]: 0x1d373f0 in spdk_thread_get_last_tsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1320 00:06:44.597 #21 NEW cov: 11937 ft: 13166 corp: 3/61b lim: 40 exec/s: 0 rss: 70Mb L: 
20/40 MS: 2 ChangeBinInt-CrossOver- 00:06:44.597 [2024-05-15 10:58:41.652072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.652104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.652230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.652249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.652375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.652396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.652522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.652537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.652667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.652681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.597 #27 NEW cov: 11943 ft: 13422 corp: 4/101b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:44.597 [2024-05-15 10:58:41.702146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.702172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.702310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.702327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.702463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.702480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.702605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.702620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.597 
[2024-05-15 10:58:41.702747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.702764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.597 #28 NEW cov: 12028 ft: 13711 corp: 5/141b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:44.597 [2024-05-15 10:58:41.741698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00140000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.741725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.741864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.741879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.597 #29 NEW cov: 12028 ft: 13850 corp: 6/161b lim: 40 exec/s: 0 rss: 70Mb L: 20/40 MS: 1 ChangeBinInt- 00:06:44.597 [2024-05-15 10:58:41.792431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.792456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.792597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.792615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.792747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.792764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.792891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000f800 cdw11:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.792906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.793032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.793047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.597 #30 NEW cov: 12028 ft: 13904 corp: 7/201b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CrossOver- 00:06:44.597 [2024-05-15 10:58:41.832549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:44.597 [2024-05-15 10:58:41.832574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.832716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.597 [2024-05-15 10:58:41.832732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.597 [2024-05-15 10:58:41.832858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.598 [2024-05-15 10:58:41.832873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.598 [2024-05-15 10:58:41.832999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.598 [2024-05-15 10:58:41.833014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.598 [2024-05-15 10:58:41.833146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.598 [2024-05-15 10:58:41.833163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.598 #31 NEW cov: 12028 ft: 13989 corp: 8/241b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:06:44.857 [2024-05-15 10:58:41.882492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.882520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.882646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.882667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.882792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.882808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.882934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.882949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.857 #32 NEW cov: 12028 ft: 14028 corp: 9/278b lim: 40 exec/s: 0 rss: 71Mb L: 37/40 MS: 1 EraseBytes- 00:06:44.857 [2024-05-15 10:58:41.922762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.922790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.922924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.922940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.923068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.923086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.923206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000f800 cdw11:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.923223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.923342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.923358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:44.857 #33 NEW cov: 12028 ft: 14046 corp: 10/318b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:44.857 [2024-05-15 10:58:41.972649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.972673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.972801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.972817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.972946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.972961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:41.973095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:f8000000 cdw11:00140000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:41.973113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.857 #34 NEW cov: 12028 ft: 14084 corp: 11/356b lim: 40 exec/s: 0 rss: 71Mb L: 38/40 MS: 1 EraseBytes- 00:06:44.857 [2024-05-15 10:58:42.022653] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.857 [2024-05-15 10:58:42.022679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.857 [2024-05-15 10:58:42.022797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.858 [2024-05-15 10:58:42.022815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.858 [2024-05-15 10:58:42.022939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.858 [2024-05-15 10:58:42.022955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.858 #35 NEW cov: 12028 ft: 14307 corp: 12/383b lim: 40 exec/s: 0 rss: 71Mb L: 27/40 MS: 1 EraseBytes- 00:06:44.858 [2024-05-15 10:58:42.062548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.858 [2024-05-15 10:58:42.062573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.858 [2024-05-15 10:58:42.062697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.858 [2024-05-15 10:58:42.062714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.858 #36 NEW cov: 12028 ft: 14357 corp: 13/403b lim: 40 exec/s: 0 rss: 71Mb L: 20/40 MS: 1 ChangeBit- 00:06:44.858 [2024-05-15 10:58:42.102955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.858 [2024-05-15 10:58:42.102981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.858 [2024-05-15 10:58:42.103113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.858 [2024-05-15 10:58:42.103129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.858 [2024-05-15 10:58:42.103256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.858 [2024-05-15 10:58:42.103272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.117 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:45.117 #37 NEW cov: 12051 ft: 14434 corp: 14/427b lim: 40 exec/s: 0 rss: 71Mb L: 24/40 MS: 1 EraseBytes- 00:06:45.117 [2024-05-15 10:58:42.153530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.117 [2024-05-15 10:58:42.153556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.117 [2024-05-15 10:58:42.153674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.117 [2024-05-15 10:58:42.153706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.117 [2024-05-15 10:58:42.153829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.117 [2024-05-15 10:58:42.153844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.117 [2024-05-15 10:58:42.153967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.117 [2024-05-15 10:58:42.153983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.117 [2024-05-15 10:58:42.154103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.117 [2024-05-15 10:58:42.154120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.117 #38 NEW cov: 12051 ft: 14475 corp: 15/467b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:45.117 [2024-05-15 10:58:42.192964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.117 [2024-05-15 10:58:42.192990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.117 [2024-05-15 10:58:42.193119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00001400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.193135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.118 #39 NEW cov: 12051 ft: 14495 corp: 16/487b lim: 40 exec/s: 0 rss: 71Mb L: 20/40 MS: 1 ShuffleBytes- 00:06:45.118 [2024-05-15 10:58:42.243510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.243537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.243655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.243672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.243798] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:000000f8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.243813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.243940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:14000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.243954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.118 #40 NEW cov: 12051 ft: 14509 corp: 17/520b lim: 40 exec/s: 40 rss: 71Mb L: 33/40 MS: 1 EraseBytes- 00:06:45.118 [2024-05-15 10:58:42.283450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.283476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.283603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:80000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.283621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.283743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.283758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.118 #41 NEW cov: 12051 ft: 14531 corp: 18/547b lim: 40 exec/s: 41 rss: 71Mb L: 27/40 MS: 1 ChangeBit- 00:06:45.118 [2024-05-15 10:58:42.323737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.323762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.323882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.323898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.324022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.324039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.324156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.324171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:06:45.118 #42 NEW cov: 12051 ft: 14556 corp: 19/581b lim: 40 exec/s: 42 rss: 71Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:06:45.118 [2024-05-15 10:58:42.373940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.373966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.374088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.374104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.374218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.374232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.118 [2024-05-15 10:58:42.374357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:f8000000 cdw11:00140000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.118 [2024-05-15 10:58:42.374374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.377 #43 NEW cov: 12051 ft: 14559 corp: 20/619b lim: 40 exec/s: 43 rss: 72Mb L: 38/40 MS: 1 ChangeBinInt- 00:06:45.377 [2024-05-15 10:58:42.424303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.424327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.377 [2024-05-15 10:58:42.424444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.424458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.377 [2024-05-15 10:58:42.424586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.424601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.377 [2024-05-15 10:58:42.424715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00008000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.424729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.377 [2024-05-15 10:58:42.424841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.424857] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.377 #44 NEW cov: 12051 ft: 14593 corp: 21/659b lim: 40 exec/s: 44 rss: 72Mb L: 40/40 MS: 1 ChangeBit- 00:06:45.377 [2024-05-15 10:58:42.464169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.464194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.377 [2024-05-15 10:58:42.464320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.464335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.377 [2024-05-15 10:58:42.464458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000f800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.464476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.377 [2024-05-15 10:58:42.464596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000014 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.464612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.377 #45 NEW cov: 12051 ft: 14655 corp: 22/697b lim: 40 exec/s: 45 rss: 72Mb L: 38/40 MS: 1 CopyPart- 00:06:45.377 [2024-05-15 10:58:42.504309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.377 [2024-05-15 10:58:42.504335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.377 [2024-05-15 10:58:42.504523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.504541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.504662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.504678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.504799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:f8000000 cdw11:00147700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.504818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.378 #46 NEW cov: 12051 ft: 14710 corp: 23/735b lim: 40 exec/s: 46 rss: 72Mb L: 38/40 MS: 1 ChangeByte- 00:06:45.378 [2024-05-15 10:58:42.554707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.554732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.554858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.554874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.555002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.555016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.555137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.555151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.555279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.555295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.378 #47 NEW cov: 12051 ft: 14745 corp: 24/775b lim: 40 exec/s: 47 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:06:45.378 [2024-05-15 10:58:42.594835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.594861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.594991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.595008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.595129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fa000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.595146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.595280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000f800 cdw11:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.595297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.595424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 
nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.595442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.378 #48 NEW cov: 12051 ft: 14758 corp: 25/815b lim: 40 exec/s: 48 rss: 72Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:45.378 [2024-05-15 10:58:42.634953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.634979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.635102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.635117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.635241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fa000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.635256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.635385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000f800 cdw11:00020014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.635400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.378 [2024-05-15 10:58:42.635533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.378 [2024-05-15 10:58:42.635547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.637 #49 NEW cov: 12051 ft: 14771 corp: 26/855b lim: 40 exec/s: 49 rss: 72Mb L: 40/40 MS: 1 ChangeBit- 00:06:45.637 [2024-05-15 10:58:42.684841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.684867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.684993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.685008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.685137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.685161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.685284] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:f8000000 cdw11:00147700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.685298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.637 #50 NEW cov: 12051 ft: 14793 corp: 27/894b lim: 40 exec/s: 50 rss: 72Mb L: 39/40 MS: 1 CrossOver- 00:06:45.637 [2024-05-15 10:58:42.734943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.734969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.735094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.735110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.735240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.735256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.735385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:f8000014 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.735399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.637 #51 NEW cov: 12051 ft: 14809 corp: 28/932b lim: 40 exec/s: 51 rss: 72Mb L: 38/40 MS: 1 ShuffleBytes- 00:06:45.637 [2024-05-15 10:58:42.774759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.774785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.774912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.774930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.637 #52 NEW cov: 12051 ft: 14818 corp: 29/952b lim: 40 exec/s: 52 rss: 72Mb L: 20/40 MS: 1 ChangeByte- 00:06:45.637 [2024-05-15 10:58:42.815478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.815504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.815623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.815638] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.815755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.815771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.815888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000feff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.815905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.637 [2024-05-15 10:58:42.816029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.816044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.637 #53 NEW cov: 12051 ft: 14829 corp: 30/992b lim: 40 exec/s: 53 rss: 72Mb L: 40/40 MS: 1 CMP- DE: "\376\377\377\377"- 00:06:45.637 [2024-05-15 10:58:42.864849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.637 [2024-05-15 10:58:42.864877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.637 #54 NEW cov: 12051 ft: 15165 corp: 31/1005b lim: 40 exec/s: 54 rss: 72Mb L: 13/40 MS: 1 EraseBytes- 00:06:45.896 [2024-05-15 10:58:42.915356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:42.915392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:42.915542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:80000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:42.915560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:42.915683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:42.915698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.896 #55 NEW cov: 12051 ft: 15186 corp: 32/1032b lim: 40 exec/s: 55 rss: 72Mb L: 27/40 MS: 1 ChangeBinInt- 00:06:45.896 [2024-05-15 10:58:42.965994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000b200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:42.966021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:42.966144] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:42.966171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:42.966298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:42.966314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:42.966445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:42.966463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:42.966574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:42.966590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.896 #56 NEW cov: 12051 ft: 15205 corp: 33/1072b lim: 40 exec/s: 56 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:06:45.896 [2024-05-15 10:58:43.015488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.015515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.015639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00001000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.015656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.896 #57 NEW cov: 12051 ft: 15215 corp: 34/1092b lim: 40 exec/s: 57 rss: 73Mb L: 20/40 MS: 1 ChangeBit- 00:06:45.896 [2024-05-15 10:58:43.065950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.065976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.066110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.066131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.066254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000f800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.066270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.066402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00140000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.066418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.896 #58 NEW cov: 12051 ft: 15225 corp: 35/1129b lim: 40 exec/s: 58 rss: 73Mb L: 37/40 MS: 1 CrossOver- 00:06:45.896 [2024-05-15 10:58:43.116239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000e5e5 cdw11:e5e5e500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.116264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.116403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.116420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.116556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00800000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.116571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.116704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000a7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.116720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.896 #59 NEW cov: 12051 ft: 15232 corp: 36/1161b lim: 40 exec/s: 59 rss: 73Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:06:45.896 [2024-05-15 10:58:43.156242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.156268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.156421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.156437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.156572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00003f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.156588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.896 [2024-05-15 10:58:43.156722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00f80000 cdw11:00001400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.896 [2024-05-15 10:58:43.156738] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.155 #60 NEW cov: 12051 ft: 15246 corp: 37/1200b lim: 40 exec/s: 60 rss: 73Mb L: 39/40 MS: 1 InsertByte- 00:06:46.156 [2024-05-15 10:58:43.196013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.156 [2024-05-15 10:58:43.196040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.156 [2024-05-15 10:58:43.196174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000014 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.156 [2024-05-15 10:58:43.196189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.156 #61 NEW cov: 12051 ft: 15249 corp: 38/1220b lim: 40 exec/s: 61 rss: 73Mb L: 20/40 MS: 1 ShuffleBytes- 00:06:46.156 [2024-05-15 10:58:43.235918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.156 [2024-05-15 10:58:43.235945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.156 [2024-05-15 10:58:43.236076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00003b00 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.156 [2024-05-15 10:58:43.236092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.156 [2024-05-15 10:58:43.236214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.156 [2024-05-15 10:58:43.236229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.156 #62 NEW cov: 12051 ft: 15268 corp: 39/1250b lim: 40 exec/s: 31 rss: 73Mb L: 30/40 MS: 1 InsertRepeatedBytes- 00:06:46.156 #62 DONE cov: 12051 ft: 15268 corp: 39/1250b lim: 40 exec/s: 31 rss: 73Mb 00:06:46.156 ###### Recommended dictionary. ###### 00:06:46.156 "\376\377\377\377" # Uses: 0 00:06:46.156 ###### End of recommended dictionary. 
###### 00:06:46.156 Done 62 runs in 2 second(s) 00:06:46.156 [2024-05-15 10:58:43.265411] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.156 10:58:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:46.415 [2024-05-15 10:58:43.434894] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
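For reference, the run.sh trace above prepares fuzzer target 14: it rewrites the NVMe/TCP listener from port 4420 to 4414, creates the corpus directory, registers the LeakSanitizer suppressions, and launches llvm_nvme_fuzz against nqn.2016-06.io.spdk:cnode1 for one minute on core 0x1. The lines below are a minimal sketch of reproducing that single run by hand; the paths and the -F/-D/-Z values are copied from the trace, the rest (variable name, omitted -P output directory) is illustrative and not part of the recorded output.

  # Sketch only: assumes the SPDK checkout layout shown in the trace above; adjust to your tree.
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_14.conf
  mkdir -p "$SPDK/../corpus/llvm_nvmf_14"
  "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' \
      -c /tmp/fuzz_json_14.conf -t 1 -D "$SPDK/../corpus/llvm_nvmf_14" -Z 14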
00:06:46.415 [2024-05-15 10:58:43.434957] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406027 ] 00:06:46.415 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.674 [2024-05-15 10:58:43.692632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.674 [2024-05-15 10:58:43.781933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.674 [2024-05-15 10:58:43.841177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.674 [2024-05-15 10:58:43.857128] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:46.674 [2024-05-15 10:58:43.857548] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:46.674 INFO: Running with entropic power schedule (0xFF, 100). 00:06:46.674 INFO: Seed: 2668983246 00:06:46.674 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:46.674 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:46.674 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:46.674 INFO: A corpus is not provided, starting from an empty corpus 00:06:46.674 #2 INITED exec/s: 0 rss: 63Mb 00:06:46.674 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:46.674 This may also happen if the target rejected all inputs we tried so far 00:06:46.674 [2024-05-15 10:58:43.903125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.674 [2024-05-15 10:58:43.903153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.674 [2024-05-15 10:58:43.903216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.674 [2024-05-15 10:58:43.903232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.674 [2024-05-15 10:58:43.903295] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.674 [2024-05-15 10:58:43.903312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.241 NEW_FUNC[1/685]: 0x495df0 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:47.241 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.241 #4 NEW cov: 11790 ft: 11787 corp: 2/23b lim: 35 exec/s: 0 rss: 70Mb L: 22/22 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:47.241 [2024-05-15 10:58:44.234652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.234702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.241 [2024-05-15 10:58:44.234843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.234877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.241 NEW_FUNC[1/1]: 0x1d80210 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:938 00:06:47.241 #8 NEW cov: 11938 ft: 12685 corp: 3/39b lim: 35 exec/s: 0 rss: 70Mb L: 16/22 MS: 4 ChangeBit-CrossOver-InsertByte-InsertRepeatedBytes- 00:06:47.241 [2024-05-15 10:58:44.284707] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.284737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.241 [2024-05-15 10:58:44.284877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.284897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.241 #9 NEW cov: 11944 ft: 12960 corp: 4/55b lim: 35 exec/s: 0 rss: 70Mb L: 16/22 MS: 1 CrossOver- 00:06:47.241 [2024-05-15 10:58:44.335113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.335139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.241 [2024-05-15 10:58:44.335289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.335315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.241 [2024-05-15 10:58:44.335456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.335479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.241 #10 NEW cov: 12029 ft: 13203 corp: 5/77b lim: 35 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 CopyPart- 00:06:47.241 [2024-05-15 10:58:44.384757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.384786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.241 #14 NEW cov: 12029 ft: 14033 corp: 6/90b lim: 35 exec/s: 0 rss: 70Mb L: 13/22 MS: 4 ChangeByte-ShuffleBytes-ChangeByte-CrossOver- 00:06:47.241 [2024-05-15 10:58:44.425384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.425416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.241 [2024-05-15 
10:58:44.425565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.425588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.241 [2024-05-15 10:58:44.425731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.425752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.241 #15 NEW cov: 12029 ft: 14132 corp: 7/114b lim: 35 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 CrossOver- 00:06:47.241 [2024-05-15 10:58:44.475178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.475208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.241 [2024-05-15 10:58:44.475338] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.241 [2024-05-15 10:58:44.475353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.241 #21 NEW cov: 12029 ft: 14223 corp: 8/130b lim: 35 exec/s: 0 rss: 70Mb L: 16/24 MS: 1 ChangeBinInt- 00:06:47.500 [2024-05-15 10:58:44.514660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.514691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.500 #22 NEW cov: 12029 ft: 14266 corp: 9/143b lim: 35 exec/s: 0 rss: 71Mb L: 13/24 MS: 1 ChangeBinInt- 00:06:47.500 [2024-05-15 10:58:44.565164] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.565195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.500 #23 NEW cov: 12029 ft: 14316 corp: 10/152b lim: 35 exec/s: 0 rss: 71Mb L: 9/24 MS: 1 CrossOver- 00:06:47.500 [2024-05-15 10:58:44.615370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.615406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.500 #24 NEW cov: 12029 ft: 14379 corp: 11/161b lim: 35 exec/s: 0 rss: 71Mb L: 9/24 MS: 1 ChangeBit- 00:06:47.500 [2024-05-15 10:58:44.666126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.666159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.500 [2024-05-15 10:58:44.666293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 
[2024-05-15 10:58:44.666316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.500 [2024-05-15 10:58:44.666461] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.666484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.500 #25 NEW cov: 12029 ft: 14410 corp: 12/184b lim: 35 exec/s: 0 rss: 71Mb L: 23/24 MS: 1 CrossOver- 00:06:47.500 [2024-05-15 10:58:44.706407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.706443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.500 [2024-05-15 10:58:44.706577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.706599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.500 [2024-05-15 10:58:44.706730] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.706757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.500 [2024-05-15 10:58:44.706882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.500 [2024-05-15 10:58:44.706902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.500 #28 NEW cov: 12029 ft: 14728 corp: 13/217b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:06:47.501 [2024-05-15 10:58:44.746119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.501 [2024-05-15 10:58:44.746155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.501 [2024-05-15 10:58:44.746285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.501 [2024-05-15 10:58:44.746308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.759 #29 NEW cov: 12029 ft: 14743 corp: 14/234b lim: 35 exec/s: 0 rss: 71Mb L: 17/33 MS: 1 InsertRepeatedBytes- 00:06:47.759 [2024-05-15 10:58:44.785837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.785864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.759 NEW_FUNC[1/3]: 0x4b72b0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:47.759 NEW_FUNC[2/3]: 
0x119ae90 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1759 00:06:47.759 #30 NEW cov: 12085 ft: 14871 corp: 15/251b lim: 35 exec/s: 0 rss: 71Mb L: 17/33 MS: 1 InsertRepeatedBytes- 00:06:47.759 [2024-05-15 10:58:44.846473] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.846504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.759 #31 NEW cov: 12085 ft: 15023 corp: 16/268b lim: 35 exec/s: 0 rss: 71Mb L: 17/33 MS: 1 ShuffleBytes- 00:06:47.759 [2024-05-15 10:58:44.896426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.896457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.759 [2024-05-15 10:58:44.896594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.896614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.759 [2024-05-15 10:58:44.896745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000fa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.896765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.759 #32 NEW cov: 12085 ft: 15055 corp: 17/291b lim: 35 exec/s: 32 rss: 71Mb L: 23/33 MS: 1 CopyPart- 00:06:47.759 [2024-05-15 10:58:44.956630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.956660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.759 [2024-05-15 10:58:44.956792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000fd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.956815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.759 #33 NEW cov: 12085 ft: 15172 corp: 18/307b lim: 35 exec/s: 33 rss: 71Mb L: 16/33 MS: 1 ChangeBit- 00:06:47.759 [2024-05-15 10:58:44.997504] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.997537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.759 [2024-05-15 10:58:44.997669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.997687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.759 [2024-05-15 10:58:44.997818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.997835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.759 [2024-05-15 10:58:44.997954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.759 [2024-05-15 10:58:44.997971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.018 #34 NEW cov: 12085 ft: 15247 corp: 19/342b lim: 35 exec/s: 34 rss: 71Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:48.018 [2024-05-15 10:58:45.047211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.047245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.018 [2024-05-15 10:58:45.047401] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.047426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.018 [2024-05-15 10:58:45.047563] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.047579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.018 #35 NEW cov: 12085 ft: 15308 corp: 20/366b lim: 35 exec/s: 35 rss: 71Mb L: 24/35 MS: 1 CMP- DE: "\304\301.\002\000\000\000\000"- 00:06:48.018 [2024-05-15 10:58:45.087021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.087047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.018 #36 NEW cov: 12085 ft: 15320 corp: 21/383b lim: 35 exec/s: 36 rss: 71Mb L: 17/35 MS: 1 ShuffleBytes- 00:06:48.018 [2024-05-15 10:58:45.127384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.127415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.018 [2024-05-15 10:58:45.127534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.127555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.018 [2024-05-15 10:58:45.127677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.127692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.018 #37 NEW cov: 12085 ft: 15411 corp: 22/404b lim: 35 exec/s: 37 rss: 71Mb L: 21/35 MS: 1 PersAutoDict- DE: "\304\301.\002\000\000\000\000"- 00:06:48.018 [2024-05-15 10:58:45.167206] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.167237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.018 [2024-05-15 10:58:45.167371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.167390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.018 #38 NEW cov: 12085 ft: 15435 corp: 23/420b lim: 35 exec/s: 38 rss: 71Mb L: 16/35 MS: 1 ChangeByte- 00:06:48.018 [2024-05-15 10:58:45.217715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.217748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.018 [2024-05-15 10:58:45.217883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.217898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.018 [2024-05-15 10:58:45.218038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000fa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.218053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.018 #39 NEW cov: 12085 ft: 15461 corp: 24/443b lim: 35 exec/s: 39 rss: 72Mb L: 23/35 MS: 1 ShuffleBytes- 00:06:48.018 [2024-05-15 10:58:45.267626] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.267657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.018 [2024-05-15 10:58:45.267802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.018 [2024-05-15 10:58:45.267821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.277 #40 NEW cov: 12085 ft: 15515 corp: 25/459b lim: 35 exec/s: 40 rss: 72Mb L: 16/35 MS: 1 PersAutoDict- DE: "\304\301.\002\000\000\000\000"- 00:06:48.277 [2024-05-15 10:58:45.307478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.307509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.277 #41 NEW cov: 12085 ft: 15554 corp: 26/469b lim: 35 exec/s: 41 rss: 72Mb L: 10/35 MS: 1 InsertByte- 00:06:48.277 [2024-05-15 10:58:45.358118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.358151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE 
ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.277 [2024-05-15 10:58:45.358287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.358313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.277 [2024-05-15 10:58:45.358444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.358463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.277 #42 NEW cov: 12085 ft: 15561 corp: 27/493b lim: 35 exec/s: 42 rss: 72Mb L: 24/35 MS: 1 CrossOver- 00:06:48.277 [2024-05-15 10:58:45.407996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.408032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.277 [2024-05-15 10:58:45.408156] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.408171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.277 #43 NEW cov: 12085 ft: 15565 corp: 28/510b lim: 35 exec/s: 43 rss: 72Mb L: 17/35 MS: 1 InsertByte- 00:06:48.277 [2024-05-15 10:58:45.458359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.458396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.277 [2024-05-15 10:58:45.458549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.458567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.277 [2024-05-15 10:58:45.458699] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.458724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.277 #44 NEW cov: 12085 ft: 15596 corp: 29/534b lim: 35 exec/s: 44 rss: 72Mb L: 24/35 MS: 1 ChangeBinInt- 00:06:48.277 [2024-05-15 10:58:45.508239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.508272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.277 [2024-05-15 10:58:45.508405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.277 [2024-05-15 10:58:45.508422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.277 #45 NEW cov: 12085 ft: 15613 corp: 30/550b lim: 35 exec/s: 45 rss: 72Mb L: 16/35 MS: 1 ChangeBit- 00:06:48.536 [2024-05-15 10:58:45.548653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.548685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.536 [2024-05-15 10:58:45.548824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.548841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.536 [2024-05-15 10:58:45.548967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.548991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.536 #46 NEW cov: 12085 ft: 15618 corp: 31/573b lim: 35 exec/s: 46 rss: 72Mb L: 23/35 MS: 1 InsertRepeatedBytes- 00:06:48.536 [2024-05-15 10:58:45.588613] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.588644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.536 [2024-05-15 10:58:45.588779] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.588794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.536 #47 NEW cov: 12085 ft: 15630 corp: 32/589b lim: 35 exec/s: 47 rss: 72Mb L: 16/35 MS: 1 ShuffleBytes- 00:06:48.536 [2024-05-15 10:58:45.628768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.628794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.536 #48 NEW cov: 12085 ft: 15656 corp: 33/606b lim: 35 exec/s: 48 rss: 72Mb L: 17/35 MS: 1 ShuffleBytes- 00:06:48.536 [2024-05-15 10:58:45.678430] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.678456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.536 #49 NEW cov: 12085 ft: 15662 corp: 34/623b lim: 35 exec/s: 49 rss: 72Mb L: 17/35 MS: 1 CMP- DE: "\003\000"- 00:06:48.536 [2024-05-15 10:58:45.719392] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.719414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.536 [2024-05-15 10:58:45.719565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.719582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.536 [2024-05-15 10:58:45.719709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000059 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.719726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.536 [2024-05-15 10:58:45.719857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000059 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.719875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.536 #50 NEW cov: 12085 ft: 15673 corp: 35/651b lim: 35 exec/s: 50 rss: 73Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:06:48.536 [2024-05-15 10:58:45.768787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.536 [2024-05-15 10:58:45.768816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.536 #51 NEW cov: 12085 ft: 15691 corp: 36/661b lim: 35 exec/s: 51 rss: 73Mb L: 10/35 MS: 1 ChangeBit- 00:06:48.795 [2024-05-15 10:58:45.819431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.795 [2024-05-15 10:58:45.819459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.795 [2024-05-15 10:58:45.819586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.795 [2024-05-15 10:58:45.819602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.795 #52 NEW cov: 12085 ft: 15705 corp: 37/686b lim: 35 exec/s: 52 rss: 73Mb L: 25/35 MS: 1 PersAutoDict- DE: "\304\301.\002\000\000\000\000"- 00:06:48.795 [2024-05-15 10:58:45.859831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.795 [2024-05-15 10:58:45.859858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.795 [2024-05-15 10:58:45.859975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000092 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.795 [2024-05-15 10:58:45.860014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.795 [2024-05-15 10:58:45.860133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000092 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.795 [2024-05-15 10:58:45.860155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.795 [2024-05-15 10:58:45.860271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000092 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.795 [2024-05-15 10:58:45.860290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.795 #53 NEW cov: 12085 ft: 15714 corp: 38/718b lim: 35 exec/s: 53 rss: 73Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:06:48.795 [2024-05-15 10:58:45.899571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.795 [2024-05-15 10:58:45.899598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.795 #54 NEW cov: 12085 ft: 15741 corp: 39/735b lim: 35 exec/s: 27 rss: 73Mb L: 17/35 MS: 1 ChangeBinInt- 00:06:48.795 #54 DONE cov: 12085 ft: 15741 corp: 39/735b lim: 35 exec/s: 27 rss: 73Mb 00:06:48.795 ###### Recommended dictionary. ###### 00:06:48.795 "\304\301.\002\000\000\000\000" # Uses: 3 00:06:48.795 "\003\000" # Uses: 0 00:06:48.796 ###### End of recommended dictionary. ###### 00:06:48.796 Done 54 runs in 2 second(s) 00:06:48.796 [2024-05-15 10:58:45.920199] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:48.796 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.055 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.055 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.055 10:58:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:49.055 [2024-05-15 10:58:46.090374] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:06:49.055 [2024-05-15 10:58:46.090448] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406354 ] 00:06:49.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.314 [2024-05-15 10:58:46.356452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.314 [2024-05-15 10:58:46.445221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.314 [2024-05-15 10:58:46.504775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.314 [2024-05-15 10:58:46.520724] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:49.314 [2024-05-15 10:58:46.521145] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:49.314 INFO: Running with entropic power schedule (0xFF, 100). 00:06:49.314 INFO: Seed: 1036002797 00:06:49.314 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:49.314 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:49.314 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:49.314 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.314 #2 INITED exec/s: 0 rss: 63Mb 00:06:49.314 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:49.314 This may also happen if the target rejected all inputs we tried so far 00:06:49.314 [2024-05-15 10:58:46.570040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000085 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.314 [2024-05-15 10:58:46.570069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.831 NEW_FUNC[1/686]: 0x497330 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:49.831 NEW_FUNC[2/686]: 0x4b72b0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:49.831 #11 NEW cov: 11803 ft: 11794 corp: 2/15b lim: 35 exec/s: 0 rss: 70Mb L: 14/14 MS: 4 ShuffleBytes-CopyPart-CMP-CMP- DE: "\377\377\377\377"-"\021\340\030sE\373\205\000"- 00:06:49.831 [2024-05-15 10:58:46.901880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.831 [2024-05-15 10:58:46.901929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.831 [2024-05-15 10:58:46.902104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000085 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.831 [2024-05-15 10:58:46.902130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.831 #12 NEW cov: 11933 ft: 12769 corp: 3/29b lim: 35 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 ChangeBit- 00:06:49.831 [2024-05-15 10:58:46.962013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000085 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.831 [2024-05-15 10:58:46.962040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.831 #18 NEW cov: 11939 ft: 12989 corp: 4/43b lim: 35 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 ChangeBit- 00:06:49.831 [2024-05-15 10:58:47.012080] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.831 [2024-05-15 10:58:47.012107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.831 [2024-05-15 10:58:47.012246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.831 [2024-05-15 10:58:47.012267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.831 #19 NEW cov: 12024 ft: 13225 corp: 5/57b lim: 35 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:49.831 [2024-05-15 10:58:47.072305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000085 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.831 [2024-05-15 10:58:47.072333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.090 #30 NEW cov: 12024 ft: 13285 corp: 6/71b lim: 35 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 ChangeByte- 00:06:50.090 [2024-05-15 10:58:47.132425] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.090 [2024-05-15 10:58:47.132455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.090 #31 NEW cov: 12024 ft: 13389 corp: 7/85b lim: 35 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:50.090 [2024-05-15 10:58:47.182675] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.090 [2024-05-15 10:58:47.182703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.090 #32 NEW cov: 12024 ft: 13433 corp: 8/99b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ChangeBinInt- 00:06:50.090 [2024-05-15 10:58:47.242905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.090 [2024-05-15 10:58:47.242943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.090 #33 NEW cov: 12024 ft: 13460 corp: 9/113b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ChangeBit- 00:06:50.090 [2024-05-15 10:58:47.303014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000085 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.090 [2024-05-15 10:58:47.303043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.090 #34 NEW cov: 12024 ft: 13579 corp: 10/127b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:50.090 [2024-05-15 10:58:47.353158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.090 [2024-05-15 10:58:47.353185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.090 [2024-05-15 10:58:47.353347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000077a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.090 [2024-05-15 10:58:47.353373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.349 #35 NEW cov: 12024 ft: 13600 corp: 11/141b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ChangeBinInt- 00:06:50.349 [2024-05-15 10:58:47.403317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NON OPERATIONAL POWER STATE CONFIG cid:5 cdw10:00000711 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.349 [2024-05-15 10:58:47.403345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.349 #36 NEW cov: 12024 ft: 13614 corp: 12/155b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ShuffleBytes- 00:06:50.349 [2024-05-15 10:58:47.463519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.349 [2024-05-15 10:58:47.463548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.349 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:50.349 #37 NEW cov: 12047 ft: 13677 corp: 13/169b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ChangeBinInt- 00:06:50.349 [2024-05-15 10:58:47.513664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000185 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.349 [2024-05-15 10:58:47.513692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.349 #38 NEW cov: 12047 ft: 13770 corp: 14/184b lim: 35 exec/s: 0 rss: 71Mb L: 15/15 MS: 1 InsertByte- 00:06:50.349 #39 NEW cov: 12047 ft: 13993 corp: 15/194b lim: 35 exec/s: 39 rss: 71Mb L: 10/15 MS: 1 EraseBytes- 00:06:50.608 [2024-05-15 10:58:47.623749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.623778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.608 #40 NEW cov: 12047 ft: 14050 corp: 16/204b lim: 35 exec/s: 40 rss: 71Mb L: 10/15 MS: 1 ChangeByte- 00:06:50.608 [2024-05-15 10:58:47.684600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.684629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.684779] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000024d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.684798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.684952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000024d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.684970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.685117] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000004fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.685136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.608 #41 NEW cov: 12047 ft: 14523 corp: 17/233b lim: 35 exec/s: 41 rss: 71Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:50.608 [2024-05-15 10:58:47.734777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.734805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.734960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000024d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.734980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.735131] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 
cdw10:00000286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.735146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.735296] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000745 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.735313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.608 #42 NEW cov: 12047 ft: 14582 corp: 18/263b lim: 35 exec/s: 42 rss: 71Mb L: 30/30 MS: 1 InsertByte- 00:06:50.608 [2024-05-15 10:58:47.795051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.795080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.795244] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000024d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.795260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.795402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000044d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.795419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.608 [2024-05-15 10:58:47.795574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000273 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.795592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.608 #43 NEW cov: 12047 ft: 14674 corp: 19/294b lim: 35 exec/s: 43 rss: 72Mb L: 31/31 MS: 1 InsertByte- 00:06:50.608 [2024-05-15 10:58:47.854441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.608 [2024-05-15 10:58:47.854471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.867 #44 NEW cov: 12047 ft: 14766 corp: 20/304b lim: 35 exec/s: 44 rss: 72Mb L: 10/31 MS: 1 CMP- DE: "\000\037"- 00:06:50.867 [2024-05-15 10:58:47.915043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:47.915071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.867 #45 NEW cov: 12047 ft: 14774 corp: 21/319b lim: 35 exec/s: 45 rss: 72Mb L: 15/31 MS: 1 ShuffleBytes- 00:06:50.867 [2024-05-15 10:58:47.975576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:47.975605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.867 [2024-05-15 10:58:47.975766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000024d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:47.975785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.867 [2024-05-15 10:58:47.975933] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000044d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:47.975951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.867 [2024-05-15 10:58:47.976104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000273 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:47.976121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.867 #46 NEW cov: 12047 ft: 14795 corp: 22/350b lim: 35 exec/s: 46 rss: 72Mb L: 31/31 MS: 1 CopyPart- 00:06:50.867 [2024-05-15 10:58:48.025659] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:48.025687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.867 [2024-05-15 10:58:48.025833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000024d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:48.025852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.867 [2024-05-15 10:58:48.026001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000044d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:48.026022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.867 [2024-05-15 10:58:48.026167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000273 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:48.026184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.867 #47 NEW cov: 12047 ft: 14804 corp: 23/382b lim: 35 exec/s: 47 rss: 72Mb L: 32/32 MS: 1 InsertByte- 00:06:50.867 [2024-05-15 10:58:48.085374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:48.085405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.867 [2024-05-15 10:58:48.085560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000085 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.867 [2024-05-15 10:58:48.085579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.867 #48 NEW cov: 12047 ft: 14809 corp: 24/396b lim: 35 exec/s: 48 rss: 72Mb L: 14/32 MS: 1 CrossOver- 00:06:51.126 [2024-05-15 10:58:48.135604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:51.126 [2024-05-15 10:58:48.135630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.126 #49 NEW cov: 12047 ft: 14826 corp: 25/410b lim: 35 exec/s: 49 rss: 72Mb L: 14/32 MS: 1 ChangeBinInt- 00:06:51.126 [2024-05-15 10:58:48.186226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.186254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.126 [2024-05-15 10:58:48.186417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000020d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.186436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.126 [2024-05-15 10:58:48.186585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000044d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.186606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.126 [2024-05-15 10:58:48.186737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000273 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.186754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.126 #50 NEW cov: 12047 ft: 14857 corp: 26/441b lim: 35 exec/s: 50 rss: 72Mb L: 31/32 MS: 1 ChangeBit- 00:06:51.126 [2024-05-15 10:58:48.235974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.236005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.126 #51 NEW cov: 12047 ft: 14864 corp: 27/455b lim: 35 exec/s: 51 rss: 72Mb L: 14/32 MS: 1 CopyPart- 00:06:51.126 [2024-05-15 10:58:48.296117] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000084 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.296146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.126 [2024-05-15 10:58:48.296294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000077a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.296313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.126 #52 NEW cov: 12047 ft: 14919 corp: 28/469b lim: 35 exec/s: 52 rss: 72Mb L: 14/32 MS: 1 ChangeByte- 00:06:51.126 [2024-05-15 10:58:48.356212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.356238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.126 [2024-05-15 10:58:48.356385] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000085 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:51.126 [2024-05-15 10:58:48.356403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.126 #53 NEW cov: 12047 ft: 14927 corp: 29/483b lim: 35 exec/s: 53 rss: 72Mb L: 14/32 MS: 1 CMP- DE: "\010\000"- 00:06:51.385 [2024-05-15 10:58:48.416236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.385 [2024-05-15 10:58:48.416262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.385 #59 NEW cov: 12047 ft: 14960 corp: 30/497b lim: 35 exec/s: 59 rss: 72Mb L: 14/32 MS: 1 ChangeByte- 00:06:51.385 [2024-05-15 10:58:48.466606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.385 [2024-05-15 10:58:48.466632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.385 #60 NEW cov: 12047 ft: 14976 corp: 31/514b lim: 35 exec/s: 60 rss: 72Mb L: 17/32 MS: 1 CopyPart- 00:06:51.385 [2024-05-15 10:58:48.526775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.385 [2024-05-15 10:58:48.526802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.385 [2024-05-15 10:58:48.526944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.385 [2024-05-15 10:58:48.526960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.385 #61 NEW cov: 12047 ft: 14998 corp: 32/528b lim: 35 exec/s: 30 rss: 72Mb L: 14/32 MS: 1 ChangeBit- 00:06:51.385 #61 DONE cov: 12047 ft: 14998 corp: 32/528b lim: 35 exec/s: 30 rss: 72Mb 00:06:51.385 ###### Recommended dictionary. ###### 00:06:51.385 "\377\377\377\377" # Uses: 2 00:06:51.385 "\021\340\030sE\373\205\000" # Uses: 0 00:06:51.385 "\000\000\000\000" # Uses: 0 00:06:51.385 "\000\037" # Uses: 0 00:06:51.385 "\010\000" # Uses: 0 00:06:51.385 ###### End of recommended dictionary. 
###### 00:06:51.385 Done 61 runs in 2 second(s) 00:06:51.385 [2024-05-15 10:58:48.555500] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.643 10:58:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:51.643 [2024-05-15 10:58:48.722987] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:06:51.643 [2024-05-15 10:58:48.723049] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406873 ] 00:06:51.643 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.901 [2024-05-15 10:58:48.979570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.901 [2024-05-15 10:58:49.072424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.901 [2024-05-15 10:58:49.132390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.901 [2024-05-15 10:58:49.148338] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:51.902 [2024-05-15 10:58:49.148776] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:51.902 INFO: Running with entropic power schedule (0xFF, 100). 00:06:51.902 INFO: Seed: 3666003711 00:06:52.159 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:52.160 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:52.160 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:52.160 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.160 #2 INITED exec/s: 0 rss: 64Mb 00:06:52.160 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.160 This may also happen if the target rejected all inputs we tried so far 00:06:52.160 [2024-05-15 10:58:49.214164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.160 [2024-05-15 10:58:49.214194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.160 [2024-05-15 10:58:49.214227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.160 [2024-05-15 10:58:49.214241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.160 [2024-05-15 10:58:49.214293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.160 [2024-05-15 10:58:49.214309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.160 [2024-05-15 10:58:49.214362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.160 [2024-05-15 10:58:49.214385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.419 NEW_FUNC[1/686]: 0x4987e0 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:52.419 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.419 
#14 NEW cov: 11893 ft: 11870 corp: 2/102b lim: 105 exec/s: 0 rss: 70Mb L: 101/101 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:52.419 [2024-05-15 10:58:49.545016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.545062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.545131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.545154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.545217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:731313934784931366 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.545237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.545300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.545321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.419 #20 NEW cov: 12023 ft: 12584 corp: 3/203b lim: 105 exec/s: 0 rss: 70Mb L: 101/101 MS: 1 CrossOver- 00:06:52.419 [2024-05-15 10:58:49.595082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.595112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.595146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.595161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.595214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.595230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.595282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.595296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.419 #26 NEW cov: 12029 ft: 12898 corp: 4/304b lim: 105 exec/s: 0 rss: 70Mb L: 101/101 MS: 1 ShuffleBytes- 00:06:52.419 [2024-05-15 10:58:49.635124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 
10:58:49.635151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.635199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.635217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.635274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.635291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.635345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.635359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.419 #27 NEW cov: 12114 ft: 13102 corp: 5/406b lim: 105 exec/s: 0 rss: 70Mb L: 102/102 MS: 1 CrossOver- 00:06:52.419 [2024-05-15 10:58:49.675273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.675299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.675343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.675357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.675428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.675444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.419 [2024-05-15 10:58:49.675498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.419 [2024-05-15 10:58:49.675514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.678 #28 NEW cov: 12114 ft: 13249 corp: 6/507b lim: 105 exec/s: 0 rss: 71Mb L: 101/102 MS: 1 CrossOver- 00:06:52.678 [2024-05-15 10:58:49.725442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.725469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.725516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:52.678 [2024-05-15 10:58:49.725531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.725583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.725598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.725651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.725666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.678 #29 NEW cov: 12114 ft: 13282 corp: 7/608b lim: 105 exec/s: 0 rss: 71Mb L: 101/102 MS: 1 ChangeBinInt- 00:06:52.678 [2024-05-15 10:58:49.765498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.765531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.765559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.765575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.765629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:731313934784931366 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.765645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.765697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.765711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.678 #30 NEW cov: 12114 ft: 13390 corp: 8/709b lim: 105 exec/s: 0 rss: 71Mb L: 101/102 MS: 1 CopyPart- 00:06:52.678 [2024-05-15 10:58:49.815634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.815661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.815706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.815721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.815777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.815792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.815848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.815863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.678 #36 NEW cov: 12114 ft: 13432 corp: 9/807b lim: 105 exec/s: 0 rss: 71Mb L: 98/102 MS: 1 EraseBytes- 00:06:52.678 [2024-05-15 10:58:49.865817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.865845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.865891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.865906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.865960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.865976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.866028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.866046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.678 #37 NEW cov: 12114 ft: 13451 corp: 10/905b lim: 105 exec/s: 0 rss: 71Mb L: 98/102 MS: 1 ShuffleBytes- 00:06:52.678 [2024-05-15 10:58:49.915700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.915728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.678 [2024-05-15 10:58:49.915777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.678 [2024-05-15 10:58:49.915792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.937 #38 NEW cov: 12114 ft: 14008 corp: 11/967b lim: 105 exec/s: 0 rss: 71Mb L: 62/102 MS: 1 EraseBytes- 00:06:52.937 [2024-05-15 10:58:49.966113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:49.966140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.937 
[2024-05-15 10:58:49.966189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:49.966202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:49.966255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:49.966271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:49.966326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:49.966342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.937 #39 NEW cov: 12114 ft: 14022 corp: 12/1068b lim: 105 exec/s: 0 rss: 71Mb L: 101/102 MS: 1 ChangeBit- 00:06:52.937 [2024-05-15 10:58:50.006210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.006238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.006279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.006294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.006348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.006364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.006423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913584 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.006438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.937 #40 NEW cov: 12114 ft: 14090 corp: 13/1169b lim: 105 exec/s: 0 rss: 71Mb L: 101/102 MS: 1 ChangeBinInt- 00:06:52.937 [2024-05-15 10:58:50.066278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.066311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.066361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.066376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.066438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.066455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.937 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:52.937 #41 NEW cov: 12137 ft: 14393 corp: 14/1251b lim: 105 exec/s: 0 rss: 71Mb L: 82/102 MS: 1 InsertRepeatedBytes- 00:06:52.937 [2024-05-15 10:58:50.116537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.116568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.116604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.116620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.116675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.116691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.116747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.116761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.937 #42 NEW cov: 12137 ft: 14419 corp: 15/1352b lim: 105 exec/s: 0 rss: 71Mb L: 101/102 MS: 1 ShuffleBytes- 00:06:52.937 [2024-05-15 10:58:50.166532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.166561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.166596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.166612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.937 [2024-05-15 10:58:50.166666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.937 [2024-05-15 10:58:50.166682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.937 #43 NEW cov: 12137 ft: 14440 corp: 16/1434b lim: 105 exec/s: 43 rss: 72Mb L: 82/102 MS: 1 ChangeBinInt- 00:06:53.195 [2024-05-15 10:58:50.216794] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.195 [2024-05-15 10:58:50.216825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.216866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.216881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.216935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.216950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.217006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.217022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.196 #44 NEW cov: 12137 ft: 14480 corp: 17/1536b lim: 105 exec/s: 44 rss: 72Mb L: 102/102 MS: 1 CrossOver- 00:06:53.196 [2024-05-15 10:58:50.256996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.257024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.257079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.257095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.257148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.257164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.257217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.257232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.257286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.257303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:53.196 #45 NEW cov: 12137 ft: 14562 corp: 18/1641b lim: 105 exec/s: 45 rss: 72Mb L: 105/105 MS: 1 
InsertRepeatedBytes- 00:06:53.196 [2024-05-15 10:58:50.297001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.297029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.297076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.297091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.297144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.297161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.297218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.297234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.196 #46 NEW cov: 12137 ft: 14621 corp: 19/1740b lim: 105 exec/s: 46 rss: 72Mb L: 99/105 MS: 1 InsertByte- 00:06:53.196 [2024-05-15 10:58:50.336897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.336925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.336960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.336975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.196 #47 NEW cov: 12137 ft: 14679 corp: 20/1791b lim: 105 exec/s: 47 rss: 72Mb L: 51/105 MS: 1 EraseBytes- 00:06:53.196 [2024-05-15 10:58:50.387306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.387334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.387390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.387405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.387459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.387476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.387531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.387548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.196 #48 NEW cov: 12137 ft: 14772 corp: 21/1893b lim: 105 exec/s: 48 rss: 72Mb L: 102/105 MS: 1 ChangeBinInt- 00:06:53.196 [2024-05-15 10:58:50.427398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.427426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.427478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.427493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.427548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.427564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.196 [2024-05-15 10:58:50.427617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.196 [2024-05-15 10:58:50.427632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.196 #49 NEW cov: 12137 ft: 14794 corp: 22/1990b lim: 105 exec/s: 49 rss: 72Mb L: 97/105 MS: 1 EraseBytes- 00:06:53.455 [2024-05-15 10:58:50.477424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.477453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.477492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.477508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.477565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.477582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.455 #50 NEW cov: 12137 ft: 14851 corp: 23/2072b lim: 105 exec/s: 50 rss: 72Mb L: 82/105 MS: 1 ChangeBit- 00:06:53.455 [2024-05-15 10:58:50.517645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.517672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.517717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.517732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.517784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.517801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.517858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.517873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.455 #51 NEW cov: 12137 ft: 14863 corp: 24/2174b lim: 105 exec/s: 51 rss: 72Mb L: 102/105 MS: 1 ChangeBinInt- 00:06:53.455 [2024-05-15 10:58:50.557762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.557790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.557839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.557854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.557908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.557924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.557988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.558006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.455 #52 NEW cov: 12137 ft: 14900 corp: 25/2273b lim: 105 exec/s: 52 rss: 72Mb L: 99/105 MS: 1 ChangeByte- 00:06:53.455 [2024-05-15 10:58:50.607906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.607934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.607984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.608001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.608053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.608069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.608122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.608137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.455 #53 NEW cov: 12137 ft: 14915 corp: 26/2372b lim: 105 exec/s: 53 rss: 72Mb L: 99/105 MS: 1 ShuffleBytes- 00:06:53.455 [2024-05-15 10:58:50.648029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.648057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.648103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.648118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.648174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.648191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.648245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913584 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.648261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.455 #54 NEW cov: 12137 ft: 14921 corp: 27/2473b lim: 105 exec/s: 54 rss: 72Mb L: 101/105 MS: 1 ShuffleBytes- 00:06:53.455 [2024-05-15 10:58:50.688131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.688159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.688207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.688224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.455 [2024-05-15 10:58:50.688278] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.455 [2024-05-15 10:58:50.688298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.456 [2024-05-15 10:58:50.688355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.456 [2024-05-15 10:58:50.688372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.456 #55 NEW cov: 12137 ft: 14946 corp: 28/2560b lim: 105 exec/s: 55 rss: 72Mb L: 87/105 MS: 1 EraseBytes- 00:06:53.714 [2024-05-15 10:58:50.738258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.738286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.714 [2024-05-15 10:58:50.738333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:56026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.738346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.714 [2024-05-15 10:58:50.738402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:731313934784931366 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.738418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.714 [2024-05-15 10:58:50.738469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.738484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.714 #56 NEW cov: 12137 ft: 14965 corp: 29/2661b lim: 105 exec/s: 56 rss: 72Mb L: 101/105 MS: 1 ChangeBinInt- 00:06:53.714 [2024-05-15 10:58:50.778271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.778299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.714 [2024-05-15 10:58:50.778331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.778345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.714 [2024-05-15 10:58:50.778406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.778422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.714 
#57 NEW cov: 12137 ft: 14985 corp: 30/2743b lim: 105 exec/s: 57 rss: 72Mb L: 82/105 MS: 1 ShuffleBytes- 00:06:53.714 [2024-05-15 10:58:50.828341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.828370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.714 [2024-05-15 10:58:50.828411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.828427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.714 [2024-05-15 10:58:50.828481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.828501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.714 #58 NEW cov: 12137 ft: 15025 corp: 31/2808b lim: 105 exec/s: 58 rss: 72Mb L: 65/105 MS: 1 EraseBytes- 00:06:53.714 [2024-05-15 10:58:50.868630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.714 [2024-05-15 10:58:50.868658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.714 [2024-05-15 10:58:50.868702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.868717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.868770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:731313934784931366 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.868785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.868840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.868856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.715 #59 NEW cov: 12137 ft: 15054 corp: 32/2910b lim: 105 exec/s: 59 rss: 72Mb L: 102/105 MS: 1 InsertByte- 00:06:53.715 [2024-05-15 10:58:50.908743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.908771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.908811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913599 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 
10:58:50.908827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.908881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.908898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.908953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.908967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.715 #60 NEW cov: 12137 ft: 15061 corp: 33/3012b lim: 105 exec/s: 60 rss: 72Mb L: 102/105 MS: 1 InsertByte- 00:06:53.715 [2024-05-15 10:58:50.948937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.948966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.949018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.949032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.949087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.949106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.949160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.949174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.715 [2024-05-15 10:58:50.949228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:2748927353322612262 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.715 [2024-05-15 10:58:50.949243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:53.715 #61 NEW cov: 12137 ft: 15092 corp: 34/3117b lim: 105 exec/s: 61 rss: 73Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:06:53.974 [2024-05-15 10:58:50.988850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:50.988877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:50.988910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:50.988925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:50.988981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:50.988997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.974 #62 NEW cov: 12137 ft: 15114 corp: 35/3199b lim: 105 exec/s: 62 rss: 73Mb L: 82/105 MS: 1 CrossOver- 00:06:53.974 [2024-05-15 10:58:51.039193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.039221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.039273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.039289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.039345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.039362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.039424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.039440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.039494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:2748926567846913757 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.039508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:53.974 #63 NEW cov: 12137 ft: 15134 corp: 36/3304b lim: 105 exec/s: 63 rss: 73Mb L: 105/105 MS: 1 CrossOver- 00:06:53.974 [2024-05-15 10:58:51.079091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.079117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.079150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073698213887 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.079165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.079219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.079236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.974 #64 NEW cov: 12137 ft: 15145 corp: 37/3386b lim: 105 exec/s: 64 rss: 73Mb L: 82/105 MS: 1 ChangeBinInt- 00:06:53.974 [2024-05-15 10:58:51.119098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.119126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.119170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.119186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.974 #65 NEW cov: 12137 ft: 15165 corp: 38/3438b lim: 105 exec/s: 65 rss: 73Mb L: 52/105 MS: 1 InsertByte- 00:06:53.974 [2024-05-15 10:58:51.169354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.169385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.169430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446743330680209407 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.169445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.169499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.169515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.974 #66 NEW cov: 12137 ft: 15182 corp: 39/3520b lim: 105 exec/s: 66 rss: 73Mb L: 82/105 MS: 1 ChangeBinInt- 00:06:53.974 [2024-05-15 10:58:51.209534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.209560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.209607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.209621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.209673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.209689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.974 [2024-05-15 10:58:51.209745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.974 [2024-05-15 10:58:51.209759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.974 #67 NEW cov: 12137 ft: 15204 corp: 40/3607b lim: 105 exec/s: 33 rss: 73Mb L: 87/105 MS: 1 CopyPart- 00:06:53.974 #67 DONE cov: 12137 ft: 15204 corp: 40/3607b lim: 105 exec/s: 33 rss: 73Mb 00:06:53.974 Done 67 runs in 2 second(s) 00:06:53.974 [2024-05-15 10:58:51.239111] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:54.233 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.233 10:58:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.233 10:58:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.233 10:58:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:06:54.233 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:06:54.233 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.233 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.234 10:58:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:06:54.234 [2024-05-15 10:58:51.409974] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:06:54.234 [2024-05-15 10:58:51.410042] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407408 ] 00:06:54.234 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.492 [2024-05-15 10:58:51.662175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.492 [2024-05-15 10:58:51.746868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.750 [2024-05-15 10:58:51.806152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.750 [2024-05-15 10:58:51.822086] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:54.750 [2024-05-15 10:58:51.822509] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:54.750 INFO: Running with entropic power schedule (0xFF, 100). 00:06:54.750 INFO: Seed: 2042038823 00:06:54.750 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:54.750 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:54.750 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:54.750 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.750 #2 INITED exec/s: 0 rss: 63Mb 00:06:54.750 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:54.750 This may also happen if the target rejected all inputs we tried so far 00:06:54.750 [2024-05-15 10:58:51.870853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.750 [2024-05-15 10:58:51.870884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.008 NEW_FUNC[1/687]: 0x49bb60 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:55.008 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:55.008 #7 NEW cov: 11914 ft: 11913 corp: 2/43b lim: 120 exec/s: 0 rss: 70Mb L: 42/42 MS: 5 CrossOver-InsertByte-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:55.008 [2024-05-15 10:58:52.201804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.008 [2024-05-15 10:58:52.201839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.008 [2024-05-15 10:58:52.201893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.008 [2024-05-15 10:58:52.201911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.008 [2024-05-15 10:58:52.201963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.009 [2024-05-15 10:58:52.201978] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.009 #8 NEW cov: 12044 ft: 13329 corp: 3/119b lim: 120 exec/s: 0 rss: 70Mb L: 76/76 MS: 1 InsertRepeatedBytes- 00:06:55.009 [2024-05-15 10:58:52.251576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.009 [2024-05-15 10:58:52.251606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.268 #9 NEW cov: 12050 ft: 13489 corp: 4/158b lim: 120 exec/s: 0 rss: 70Mb L: 39/76 MS: 1 EraseBytes- 00:06:55.268 [2024-05-15 10:58:52.301693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.301720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.268 #10 NEW cov: 12135 ft: 13868 corp: 5/198b lim: 120 exec/s: 0 rss: 70Mb L: 40/76 MS: 1 InsertByte- 00:06:55.268 [2024-05-15 10:58:52.352353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.352386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.352433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.352448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.352500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446742978492891135 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.352520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.352574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.352588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.268 #11 NEW cov: 12135 ft: 14313 corp: 6/297b lim: 120 exec/s: 0 rss: 70Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:06:55.268 [2024-05-15 10:58:52.392439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.392468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.392512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.392527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.392580] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.392597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.392650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.392665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.268 #17 NEW cov: 12135 ft: 14454 corp: 7/405b lim: 120 exec/s: 0 rss: 70Mb L: 108/108 MS: 1 CopyPart- 00:06:55.268 [2024-05-15 10:58:52.432082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.432111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.268 #18 NEW cov: 12135 ft: 14494 corp: 8/447b lim: 120 exec/s: 0 rss: 70Mb L: 42/108 MS: 1 CopyPart- 00:06:55.268 [2024-05-15 10:58:52.472524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069582423177 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.472552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.472587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.472602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.472656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.472672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.268 #22 NEW cov: 12135 ft: 14612 corp: 9/528b lim: 120 exec/s: 0 rss: 71Mb L: 81/108 MS: 4 InsertByte-CMP-ChangeBinInt-InsertRepeatedBytes- DE: "\001\000"- 00:06:55.268 [2024-05-15 10:58:52.512764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.512791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.512830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.512844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.512899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.512915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.268 [2024-05-15 10:58:52.512967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18444492273895866367 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.268 [2024-05-15 10:58:52.512982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.527 #23 NEW cov: 12135 ft: 14686 corp: 10/636b lim: 120 exec/s: 0 rss: 71Mb L: 108/108 MS: 1 ChangeBit- 00:06:55.527 [2024-05-15 10:58:52.562452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.562480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.527 #24 NEW cov: 12135 ft: 14726 corp: 11/678b lim: 120 exec/s: 0 rss: 71Mb L: 42/108 MS: 1 CrossOver- 00:06:55.527 [2024-05-15 10:58:52.612718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.612746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.612783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7740398493674204011 len:27500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.612799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.527 #25 NEW cov: 12135 ft: 15057 corp: 12/742b lim: 120 exec/s: 0 rss: 71Mb L: 64/108 MS: 1 InsertRepeatedBytes- 00:06:55.527 [2024-05-15 10:58:52.652871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8608480565902407543 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.652898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.652942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.652957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.527 #26 NEW cov: 12135 ft: 15075 corp: 13/808b lim: 120 exec/s: 0 rss: 71Mb L: 66/108 MS: 1 InsertRepeatedBytes- 00:06:55.527 [2024-05-15 10:58:52.692955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.692983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.693024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.693041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.527 #27 NEW cov: 12135 ft: 15080 corp: 14/858b lim: 120 exec/s: 0 rss: 71Mb L: 
50/108 MS: 1 CMP- DE: "\000\000\000\000\002.\301\276"- 00:06:55.527 [2024-05-15 10:58:52.733343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:613785354108928 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.733373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.733412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.733428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.733483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.733499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.733551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.733567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.527 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:55.527 #29 NEW cov: 12158 ft: 15144 corp: 15/956b lim: 120 exec/s: 0 rss: 71Mb L: 98/108 MS: 2 PersAutoDict-InsertRepeatedBytes- DE: "\000\000\000\000\002.\301\276"- 00:06:55.527 [2024-05-15 10:58:52.773473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.773502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.773543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.773559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.773616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:142848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.773631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.527 [2024-05-15 10:58:52.773687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744069414584575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-05-15 10:58:52.773703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.786 #30 NEW cov: 12158 ft: 15150 corp: 16/1074b lim: 120 exec/s: 0 rss: 71Mb L: 118/118 MS: 1 CrossOver- 00:06:55.787 [2024-05-15 10:58:52.813452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.813480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.813515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.813529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.813584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.813599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.787 #31 NEW cov: 12158 ft: 15174 corp: 17/1160b lim: 120 exec/s: 0 rss: 71Mb L: 86/118 MS: 1 CrossOver- 00:06:55.787 [2024-05-15 10:58:52.853248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.853278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.787 #32 NEW cov: 12158 ft: 15183 corp: 18/1203b lim: 120 exec/s: 32 rss: 71Mb L: 43/118 MS: 1 InsertByte- 00:06:55.787 [2024-05-15 10:58:52.893888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.893916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.893957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.893973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.894024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446742978492891135 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.894040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.894094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.894109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.787 #33 NEW cov: 12158 ft: 15194 corp: 19/1302b lim: 120 exec/s: 33 rss: 71Mb L: 99/118 MS: 1 ShuffleBytes- 00:06:55.787 [2024-05-15 10:58:52.943955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:613785354108928 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.943983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.944023] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.944039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.944092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.944107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.944162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.944177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.787 #34 NEW cov: 12158 ft: 15204 corp: 20/1400b lim: 120 exec/s: 34 rss: 71Mb L: 98/118 MS: 1 ChangeByte- 00:06:55.787 [2024-05-15 10:58:52.994281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.994308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.994361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.994376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.994436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446742978492891135 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.994451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.994503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.994519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:52.994572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:52.994587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:55.787 #35 NEW cov: 12158 ft: 15253 corp: 21/1520b lim: 120 exec/s: 35 rss: 71Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:06:55.787 [2024-05-15 10:58:53.034187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:613785354108928 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:53.034215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:53.034253] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:53.034268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:53.034318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:53.034333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.787 [2024-05-15 10:58:53.034393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4340625612570639420 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.787 [2024-05-15 10:58:53.034408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.046 #36 NEW cov: 12158 ft: 15273 corp: 22/1618b lim: 120 exec/s: 36 rss: 71Mb L: 98/120 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:56.046 [2024-05-15 10:58:53.074322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.046 [2024-05-15 10:58:53.074349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.046 [2024-05-15 10:58:53.074397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.046 [2024-05-15 10:58:53.074413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.046 [2024-05-15 10:58:53.074466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.074480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.047 [2024-05-15 10:58:53.074535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.074551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.047 #37 NEW cov: 12158 ft: 15275 corp: 23/1726b lim: 120 exec/s: 37 rss: 71Mb L: 108/120 MS: 1 CrossOver- 00:06:56.047 [2024-05-15 10:58:53.113972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.113999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.047 #38 NEW cov: 12158 ft: 15295 corp: 24/1769b lim: 120 exec/s: 38 rss: 71Mb L: 43/120 MS: 1 PersAutoDict- DE: "\000\000\000\000\002.\301\276"- 00:06:56.047 [2024-05-15 10:58:53.164620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.164647] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.047 [2024-05-15 10:58:53.164691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.164706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.047 [2024-05-15 10:58:53.164759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.164774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.047 [2024-05-15 10:58:53.164828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.164842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.047 #39 NEW cov: 12158 ft: 15311 corp: 25/1874b lim: 120 exec/s: 39 rss: 72Mb L: 105/120 MS: 1 EraseBytes- 00:06:56.047 [2024-05-15 10:58:53.204302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.204329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.047 #40 NEW cov: 12158 ft: 15361 corp: 26/1914b lim: 120 exec/s: 40 rss: 72Mb L: 40/120 MS: 1 EraseBytes- 00:06:56.047 [2024-05-15 10:58:53.254412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.254439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.047 #41 NEW cov: 12158 ft: 15401 corp: 27/1954b lim: 120 exec/s: 41 rss: 72Mb L: 40/120 MS: 1 ShuffleBytes- 00:06:56.047 [2024-05-15 10:58:53.304574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.047 [2024-05-15 10:58:53.304601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.306 #42 NEW cov: 12158 ft: 15414 corp: 28/1994b lim: 120 exec/s: 42 rss: 72Mb L: 40/120 MS: 1 ChangeBinInt- 00:06:56.306 [2024-05-15 10:58:53.355194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:613785354108928 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.355222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.355264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.355280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.355331] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.355349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.355405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4340625612570639420 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.355421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.306 #43 NEW cov: 12158 ft: 15431 corp: 29/2101b lim: 120 exec/s: 43 rss: 72Mb L: 107/120 MS: 1 CopyPart- 00:06:56.306 [2024-05-15 10:58:53.405315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:613785354108928 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.405343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.405395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.405411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.405463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.405478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.405531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4340410370284600380 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.405547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.306 #44 NEW cov: 12158 ft: 15439 corp: 30/2201b lim: 120 exec/s: 44 rss: 72Mb L: 100/120 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:56.306 [2024-05-15 10:58:53.445149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.445176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.445208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.445223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.306 #46 NEW cov: 12158 ft: 15444 corp: 31/2269b lim: 120 exec/s: 46 rss: 72Mb L: 68/120 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:56.306 [2024-05-15 10:58:53.485543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:613785354108928 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 
10:58:53.485570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.485618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.485632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.485686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.485702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.306 [2024-05-15 10:58:53.485758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.485774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.306 #47 NEW cov: 12158 ft: 15477 corp: 32/2367b lim: 120 exec/s: 47 rss: 72Mb L: 98/120 MS: 1 ChangeBit- 00:06:56.306 [2024-05-15 10:58:53.535223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13646891247544369347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.306 [2024-05-15 10:58:53.535251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.306 #48 NEW cov: 12158 ft: 15485 corp: 33/2410b lim: 120 exec/s: 48 rss: 72Mb L: 43/120 MS: 1 CMP- DE: "\303\275c\177I\373\205\000"- 00:06:56.566 [2024-05-15 10:58:53.585839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.585867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.585911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.585927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.585981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.585996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.586051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.586066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.566 #49 NEW cov: 12158 ft: 15490 corp: 34/2519b lim: 120 exec/s: 49 rss: 73Mb L: 109/120 MS: 1 CrossOver- 00:06:56.566 [2024-05-15 10:58:53.635831] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10634005404898792339 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.635858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.635889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10634005407197270931 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.635904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.635958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10634005407197270931 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.635974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.566 #50 NEW cov: 12158 ft: 15562 corp: 35/2595b lim: 120 exec/s: 50 rss: 73Mb L: 76/120 MS: 1 InsertRepeatedBytes- 00:06:56.566 [2024-05-15 10:58:53.675934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.675962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.676002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.676017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.676076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.676092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.566 #51 NEW cov: 12158 ft: 15571 corp: 36/2686b lim: 120 exec/s: 51 rss: 73Mb L: 91/120 MS: 1 CopyPart- 00:06:56.566 [2024-05-15 10:58:53.725952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.725981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.726010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.726025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 #52 NEW cov: 12158 ft: 15591 corp: 37/2754b lim: 120 exec/s: 52 rss: 73Mb L: 68/120 MS: 1 ChangeByte- 00:06:56.566 [2024-05-15 10:58:53.766188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:172294144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.766217] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.766253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.766269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.766322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.766338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.566 #53 NEW cov: 12158 ft: 15596 corp: 38/2830b lim: 120 exec/s: 53 rss: 73Mb L: 76/120 MS: 1 CrossOver- 00:06:56.566 [2024-05-15 10:58:53.806475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:613785354108928 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.806503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.806547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.806564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.806614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.806629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 10:58:53.806680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4340410370284600380 len:15421 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.566 [2024-05-15 10:58:53.806696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.566 #54 NEW cov: 12158 ft: 15647 corp: 39/2928b lim: 120 exec/s: 54 rss: 73Mb L: 98/120 MS: 1 ChangeBit- 00:06:56.826 [2024-05-15 10:58:53.846457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10634005404898792339 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.826 [2024-05-15 10:58:53.846489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.826 [2024-05-15 10:58:53.846520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10634005407197270931 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.826 [2024-05-15 10:58:53.846535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.826 [2024-05-15 10:58:53.846586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10634005407197270931 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.826 [2024-05-15 10:58:53.846602] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.826 #55 NEW cov: 12158 ft: 15654 corp: 40/3004b lim: 120 exec/s: 27 rss: 73Mb L: 76/120 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:56.826 #55 DONE cov: 12158 ft: 15654 corp: 40/3004b lim: 120 exec/s: 27 rss: 73Mb 00:06:56.826 ###### Recommended dictionary. ###### 00:06:56.826 "\001\000" # Uses: 2 00:06:56.826 "\000\000\000\000\002.\301\276" # Uses: 2 00:06:56.826 "\377\377\377\377" # Uses: 0 00:06:56.826 "\303\275c\177I\373\205\000" # Uses: 0 00:06:56.826 ###### End of recommended dictionary. ###### 00:06:56.826 Done 55 runs in 2 second(s) 00:06:56.826 [2024-05-15 10:58:53.877770] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:56.826 10:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.826 10:58:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.826 10:58:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.826 10:58:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:56.826 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:06:56.827 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.827 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.827 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.827 10:58:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:06:56.827 [2024-05-15 10:58:54.045569] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:06:56.827 [2024-05-15 10:58:54.045658] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407829 ] 00:06:56.827 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.086 [2024-05-15 10:58:54.300310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.356 [2024-05-15 10:58:54.392513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.356 [2024-05-15 10:58:54.451667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.356 [2024-05-15 10:58:54.467613] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:57.356 [2024-05-15 10:58:54.468029] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:06:57.356 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.357 INFO: Seed: 394069580 00:06:57.357 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:57.357 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:57.357 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:57.357 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.357 #2 INITED exec/s: 0 rss: 63Mb 00:06:57.357 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.357 This may also happen if the target rejected all inputs we tried so far 00:06:57.357 [2024-05-15 10:58:54.513515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.357 [2024-05-15 10:58:54.513544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.357 [2024-05-15 10:58:54.513582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.357 [2024-05-15 10:58:54.513597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.357 [2024-05-15 10:58:54.513646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.357 [2024-05-15 10:58:54.513660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.357 [2024-05-15 10:58:54.513713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:57.357 [2024-05-15 10:58:54.513728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.357 [2024-05-15 10:58:54.513781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:57.357 [2024-05-15 10:58:54.513795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:57.617 NEW_FUNC[1/685]: 0x49f450 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:06:57.617 NEW_FUNC[2/685]: 0x4be420 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.617 #27 NEW cov: 11857 ft: 11858 corp: 2/101b lim: 100 exec/s: 0 rss: 70Mb L: 100/100 MS: 5 ChangeBit-InsertByte-EraseBytes-CrossOver-InsertRepeatedBytes- 00:06:57.617 [2024-05-15 10:58:54.823904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.617 [2024-05-15 10:58:54.823938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.617 [2024-05-15 10:58:54.823990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.617 [2024-05-15 10:58:54.824005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.617 #34 NEW cov: 11987 ft: 12821 corp: 3/160b lim: 100 exec/s: 0 rss: 70Mb L: 59/100 MS: 2 InsertByte-CrossOver- 00:06:57.617 [2024-05-15 10:58:54.864166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.617 [2024-05-15 10:58:54.864194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.617 [2024-05-15 10:58:54.864227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.617 [2024-05-15 10:58:54.864241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.617 [2024-05-15 10:58:54.864289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.617 [2024-05-15 10:58:54.864303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.617 [2024-05-15 10:58:54.864350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:57.617 [2024-05-15 10:58:54.864363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.876 #35 NEW cov: 11993 ft: 13077 corp: 4/251b lim: 100 exec/s: 0 rss: 71Mb L: 91/100 MS: 1 InsertRepeatedBytes- 00:06:57.876 [2024-05-15 10:58:54.913979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.876 [2024-05-15 10:58:54.914003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.876 #36 NEW cov: 12078 ft: 13682 corp: 5/276b lim: 100 exec/s: 0 rss: 71Mb L: 25/100 MS: 1 InsertRepeatedBytes- 00:06:57.876 [2024-05-15 10:58:54.954168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.876 [2024-05-15 10:58:54.954194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.876 [2024-05-15 10:58:54.954236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.876 [2024-05-15 10:58:54.954250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.876 #42 NEW cov: 12078 ft: 13831 corp: 6/335b lim: 100 exec/s: 0 rss: 71Mb L: 59/100 MS: 
1 ChangeBinInt- 00:06:57.876 [2024-05-15 10:58:54.994210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.876 [2024-05-15 10:58:54.994236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.876 #43 NEW cov: 12078 ft: 13948 corp: 7/360b lim: 100 exec/s: 0 rss: 71Mb L: 25/100 MS: 1 CopyPart- 00:06:57.876 [2024-05-15 10:58:55.044687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.876 [2024-05-15 10:58:55.044714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.876 [2024-05-15 10:58:55.044752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.876 [2024-05-15 10:58:55.044765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.876 [2024-05-15 10:58:55.044814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.876 [2024-05-15 10:58:55.044828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.876 [2024-05-15 10:58:55.044877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:57.876 [2024-05-15 10:58:55.044892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.876 #44 NEW cov: 12078 ft: 13980 corp: 8/451b lim: 100 exec/s: 0 rss: 71Mb L: 91/100 MS: 1 ChangeBit- 00:06:57.876 [2024-05-15 10:58:55.094675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.876 [2024-05-15 10:58:55.094702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.876 [2024-05-15 10:58:55.094732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.876 [2024-05-15 10:58:55.094747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.876 [2024-05-15 10:58:55.094795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.876 [2024-05-15 10:58:55.094809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.876 #45 NEW cov: 12078 ft: 14231 corp: 9/518b lim: 100 exec/s: 0 rss: 71Mb L: 67/100 MS: 1 InsertRepeatedBytes- 00:06:57.876 [2024-05-15 10:58:55.134684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.876 [2024-05-15 10:58:55.134711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.876 [2024-05-15 10:58:55.134760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.876 [2024-05-15 10:58:55.134775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.135 #46 NEW cov: 12078 ft: 14313 corp: 10/563b 
lim: 100 exec/s: 0 rss: 71Mb L: 45/100 MS: 1 CrossOver- 00:06:58.135 [2024-05-15 10:58:55.185023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.135 [2024-05-15 10:58:55.185051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.185086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.135 [2024-05-15 10:58:55.185099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.185149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.135 [2024-05-15 10:58:55.185163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.185213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.135 [2024-05-15 10:58:55.185227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.135 #47 NEW cov: 12078 ft: 14356 corp: 11/654b lim: 100 exec/s: 0 rss: 71Mb L: 91/100 MS: 1 ChangeByte- 00:06:58.135 [2024-05-15 10:58:55.225265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.135 [2024-05-15 10:58:55.225290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.225342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.135 [2024-05-15 10:58:55.225355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.225406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.135 [2024-05-15 10:58:55.225420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.225470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.135 [2024-05-15 10:58:55.225484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.225532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.135 [2024-05-15 10:58:55.225546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.135 #48 NEW cov: 12078 ft: 14387 corp: 12/754b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 ChangeBinInt- 00:06:58.135 [2024-05-15 10:58:55.275168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.135 [2024-05-15 10:58:55.275196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.275224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 
nsid:0 00:06:58.135 [2024-05-15 10:58:55.275238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.275289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.135 [2024-05-15 10:58:55.275303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.135 #54 NEW cov: 12078 ft: 14448 corp: 13/821b lim: 100 exec/s: 0 rss: 72Mb L: 67/100 MS: 1 ChangeBit- 00:06:58.135 [2024-05-15 10:58:55.325291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.135 [2024-05-15 10:58:55.325319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.325345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.135 [2024-05-15 10:58:55.325358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.325411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.135 [2024-05-15 10:58:55.325425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.135 #55 NEW cov: 12078 ft: 14506 corp: 14/881b lim: 100 exec/s: 0 rss: 72Mb L: 60/100 MS: 1 InsertByte- 00:06:58.135 [2024-05-15 10:58:55.365646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.135 [2024-05-15 10:58:55.365673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.135 [2024-05-15 10:58:55.365724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.136 [2024-05-15 10:58:55.365737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.136 [2024-05-15 10:58:55.365786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.136 [2024-05-15 10:58:55.365800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.136 [2024-05-15 10:58:55.365849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.136 [2024-05-15 10:58:55.365864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.136 [2024-05-15 10:58:55.365914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.136 [2024-05-15 10:58:55.365928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.136 #56 NEW cov: 12078 ft: 14529 corp: 15/981b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 CrossOver- 00:06:58.396 [2024-05-15 10:58:55.415808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.396 [2024-05-15 10:58:55.415835] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.396 [2024-05-15 10:58:55.415880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.396 [2024-05-15 10:58:55.415893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.396 [2024-05-15 10:58:55.415946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.396 [2024-05-15 10:58:55.415961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.396 [2024-05-15 10:58:55.416010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.396 [2024-05-15 10:58:55.416024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.396 [2024-05-15 10:58:55.416073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.396 [2024-05-15 10:58:55.416087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.396 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:58.397 #57 NEW cov: 12101 ft: 14602 corp: 16/1081b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 ChangeBit- 00:06:58.397 [2024-05-15 10:58:55.465613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.397 [2024-05-15 10:58:55.465640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.465679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.397 [2024-05-15 10:58:55.465694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.397 #58 NEW cov: 12101 ft: 14630 corp: 17/1140b lim: 100 exec/s: 0 rss: 72Mb L: 59/100 MS: 1 CrossOver- 00:06:58.397 [2024-05-15 10:58:55.516029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.397 [2024-05-15 10:58:55.516056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.516101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.397 [2024-05-15 10:58:55.516115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.516165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.397 [2024-05-15 10:58:55.516179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.516227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.397 [2024-05-15 10:58:55.516240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.516291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.397 [2024-05-15 10:58:55.516306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.397 #59 NEW cov: 12101 ft: 14653 corp: 18/1240b lim: 100 exec/s: 59 rss: 72Mb L: 100/100 MS: 1 CopyPart- 00:06:58.397 [2024-05-15 10:58:55.555938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.397 [2024-05-15 10:58:55.555965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.555992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.397 [2024-05-15 10:58:55.556007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.556059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.397 [2024-05-15 10:58:55.556072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.397 #63 NEW cov: 12101 ft: 14683 corp: 19/1303b lim: 100 exec/s: 63 rss: 72Mb L: 63/100 MS: 4 EraseBytes-ChangeBit-InsertByte-InsertRepeatedBytes- 00:06:58.397 [2024-05-15 10:58:55.596074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.397 [2024-05-15 10:58:55.596101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.596139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.397 [2024-05-15 10:58:55.596152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.596204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.397 [2024-05-15 10:58:55.596219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.397 #64 NEW cov: 12101 ft: 14707 corp: 20/1378b lim: 100 exec/s: 64 rss: 72Mb L: 75/100 MS: 1 InsertRepeatedBytes- 00:06:58.397 [2024-05-15 10:58:55.636389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.397 [2024-05-15 10:58:55.636416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.636465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.397 [2024-05-15 10:58:55.636479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.636530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.397 [2024-05-15 10:58:55.636543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.636591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.397 [2024-05-15 10:58:55.636606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.397 [2024-05-15 10:58:55.636655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.397 [2024-05-15 10:58:55.636669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.657 #70 NEW cov: 12101 ft: 14712 corp: 21/1478b lim: 100 exec/s: 70 rss: 72Mb L: 100/100 MS: 1 ShuffleBytes- 00:06:58.657 [2024-05-15 10:58:55.686586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.657 [2024-05-15 10:58:55.686613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.657 [2024-05-15 10:58:55.686658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.657 [2024-05-15 10:58:55.686671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.657 [2024-05-15 10:58:55.686719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.657 [2024-05-15 10:58:55.686733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.657 [2024-05-15 10:58:55.686784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.657 [2024-05-15 10:58:55.686797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.657 [2024-05-15 10:58:55.686849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.657 [2024-05-15 10:58:55.686866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.657 #71 NEW cov: 12101 ft: 14757 corp: 22/1578b lim: 100 exec/s: 71 rss: 73Mb L: 100/100 MS: 1 ChangeBit- 00:06:58.657 [2024-05-15 10:58:55.736707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.657 [2024-05-15 10:58:55.736734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.657 [2024-05-15 10:58:55.736772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.657 [2024-05-15 10:58:55.736784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.657 [2024-05-15 10:58:55.736835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.657 [2024-05-15 10:58:55.736849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.657 [2024-05-15 10:58:55.736897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 
00:06:58.657 [2024-05-15 10:58:55.736910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.657 [2024-05-15 10:58:55.736961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.658 [2024-05-15 10:58:55.736977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.658 #72 NEW cov: 12101 ft: 14771 corp: 23/1678b lim: 100 exec/s: 72 rss: 73Mb L: 100/100 MS: 1 ChangeByte- 00:06:58.658 [2024-05-15 10:58:55.776823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.658 [2024-05-15 10:58:55.776850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.776894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.658 [2024-05-15 10:58:55.776906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.776958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.658 [2024-05-15 10:58:55.776971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.777022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.658 [2024-05-15 10:58:55.777036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.777088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.658 [2024-05-15 10:58:55.777103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.658 #73 NEW cov: 12101 ft: 14793 corp: 24/1778b lim: 100 exec/s: 73 rss: 73Mb L: 100/100 MS: 1 CopyPart- 00:06:58.658 [2024-05-15 10:58:55.826488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.658 [2024-05-15 10:58:55.826514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.658 #74 NEW cov: 12101 ft: 14807 corp: 25/1803b lim: 100 exec/s: 74 rss: 73Mb L: 25/100 MS: 1 CMP- DE: "\000\000\000\007"- 00:06:58.658 [2024-05-15 10:58:55.867008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.658 [2024-05-15 10:58:55.867034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.867081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.658 [2024-05-15 10:58:55.867098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.867146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.658 [2024-05-15 10:58:55.867160] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.867211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.658 [2024-05-15 10:58:55.867225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.867271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.658 [2024-05-15 10:58:55.867286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.658 #75 NEW cov: 12101 ft: 14828 corp: 26/1903b lim: 100 exec/s: 75 rss: 73Mb L: 100/100 MS: 1 CrossOver- 00:06:58.658 [2024-05-15 10:58:55.917126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.658 [2024-05-15 10:58:55.917152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.917198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.658 [2024-05-15 10:58:55.917212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.917260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.658 [2024-05-15 10:58:55.917278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.658 [2024-05-15 10:58:55.917351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.658 [2024-05-15 10:58:55.917370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.917 #76 NEW cov: 12101 ft: 14836 corp: 27/1991b lim: 100 exec/s: 76 rss: 73Mb L: 88/100 MS: 1 EraseBytes- 00:06:58.917 [2024-05-15 10:58:55.957294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.917 [2024-05-15 10:58:55.957320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.917 [2024-05-15 10:58:55.957367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.917 [2024-05-15 10:58:55.957387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.917 [2024-05-15 10:58:55.957437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.917 [2024-05-15 10:58:55.957451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.917 [2024-05-15 10:58:55.957503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.917 [2024-05-15 10:58:55.957517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.917 [2024-05-15 10:58:55.957565] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.917 [2024-05-15 10:58:55.957579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.917 #77 NEW cov: 12101 ft: 14863 corp: 28/2091b lim: 100 exec/s: 77 rss: 73Mb L: 100/100 MS: 1 ChangeByte- 00:06:58.917 [2024-05-15 10:58:56.007043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.917 [2024-05-15 10:58:56.007073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.917 #78 NEW cov: 12101 ft: 14883 corp: 29/2127b lim: 100 exec/s: 78 rss: 73Mb L: 36/100 MS: 1 EraseBytes- 00:06:58.917 [2024-05-15 10:58:56.057581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.917 [2024-05-15 10:58:56.057607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.917 [2024-05-15 10:58:56.057654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.917 [2024-05-15 10:58:56.057668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.917 [2024-05-15 10:58:56.057717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.917 [2024-05-15 10:58:56.057731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.917 [2024-05-15 10:58:56.057782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.917 [2024-05-15 10:58:56.057796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.917 [2024-05-15 10:58:56.057847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.917 [2024-05-15 10:58:56.057862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.917 #79 NEW cov: 12101 ft: 14904 corp: 30/2227b lim: 100 exec/s: 79 rss: 73Mb L: 100/100 MS: 1 ChangeBit- 00:06:58.917 [2024-05-15 10:58:56.107650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.918 [2024-05-15 10:58:56.107677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.918 [2024-05-15 10:58:56.107720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.918 [2024-05-15 10:58:56.107733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.918 [2024-05-15 10:58:56.107784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.918 [2024-05-15 10:58:56.107797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.918 [2024-05-15 10:58:56.107847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.918 [2024-05-15 10:58:56.107861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.918 #80 NEW cov: 12101 ft: 14930 corp: 31/2308b lim: 100 exec/s: 80 rss: 73Mb L: 81/100 MS: 1 EraseBytes- 00:06:58.918 [2024-05-15 10:58:56.157852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.918 [2024-05-15 10:58:56.157878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.918 [2024-05-15 10:58:56.157926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.918 [2024-05-15 10:58:56.157939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.918 [2024-05-15 10:58:56.157989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.918 [2024-05-15 10:58:56.158003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.918 [2024-05-15 10:58:56.158054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.918 [2024-05-15 10:58:56.158071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.918 [2024-05-15 10:58:56.158123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:58.918 [2024-05-15 10:58:56.158137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:58.918 #81 NEW cov: 12101 ft: 14939 corp: 32/2408b lim: 100 exec/s: 81 rss: 74Mb L: 100/100 MS: 1 CrossOver- 00:06:59.180 [2024-05-15 10:58:56.197922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.180 [2024-05-15 10:58:56.197949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.180 [2024-05-15 10:58:56.197995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.180 [2024-05-15 10:58:56.198008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.180 [2024-05-15 10:58:56.198060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.180 [2024-05-15 10:58:56.198074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.180 [2024-05-15 10:58:56.198127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.180 [2024-05-15 10:58:56.198140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.180 [2024-05-15 10:58:56.198191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:59.180 [2024-05-15 10:58:56.198205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 
sqhd:0006 p:0 m:0 dnr:1 00:06:59.180 #87 NEW cov: 12101 ft: 14958 corp: 33/2508b lim: 100 exec/s: 87 rss: 74Mb L: 100/100 MS: 1 ChangeByte- 00:06:59.180 [2024-05-15 10:58:56.238101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.180 [2024-05-15 10:58:56.238127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.180 [2024-05-15 10:58:56.238171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.180 [2024-05-15 10:58:56.238184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.180 [2024-05-15 10:58:56.238234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.181 [2024-05-15 10:58:56.238247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.238297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.181 [2024-05-15 10:58:56.238310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.238359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:59.181 [2024-05-15 10:58:56.238373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.181 #88 NEW cov: 12101 ft: 14990 corp: 34/2608b lim: 100 exec/s: 88 rss: 74Mb L: 100/100 MS: 1 ChangeBit- 00:06:59.181 [2024-05-15 10:58:56.278185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.181 [2024-05-15 10:58:56.278212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.278256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.181 [2024-05-15 10:58:56.278270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.278323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.181 [2024-05-15 10:58:56.278338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.278390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.181 [2024-05-15 10:58:56.278405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.278454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:59.181 [2024-05-15 10:58:56.278469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.181 #89 NEW cov: 12101 ft: 15020 corp: 35/2708b lim: 100 exec/s: 89 rss: 74Mb L: 100/100 MS: 1 ChangeBinInt- 00:06:59.181 [2024-05-15 10:58:56.328392] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.181 [2024-05-15 10:58:56.328424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.328454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.181 [2024-05-15 10:58:56.328469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.328519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.181 [2024-05-15 10:58:56.328533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.328585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.181 [2024-05-15 10:58:56.328599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.328652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:59.181 [2024-05-15 10:58:56.328666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.181 #90 NEW cov: 12101 ft: 15021 corp: 36/2808b lim: 100 exec/s: 90 rss: 74Mb L: 100/100 MS: 1 PersAutoDict- DE: "\000\000\000\007"- 00:06:59.181 [2024-05-15 10:58:56.368445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.181 [2024-05-15 10:58:56.368476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.181 [2024-05-15 10:58:56.368508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.182 [2024-05-15 10:58:56.368521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.182 [2024-05-15 10:58:56.368572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.182 [2024-05-15 10:58:56.368586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.182 [2024-05-15 10:58:56.368636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.182 [2024-05-15 10:58:56.368650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.182 [2024-05-15 10:58:56.368699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:59.182 [2024-05-15 10:58:56.368714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.182 #91 NEW cov: 12101 ft: 15026 corp: 37/2908b lim: 100 exec/s: 91 rss: 74Mb L: 100/100 MS: 1 CopyPart- 00:06:59.182 [2024-05-15 10:58:56.408227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.182 [2024-05-15 10:58:56.408253] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.182 [2024-05-15 10:58:56.408280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.182 [2024-05-15 10:58:56.408293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.182 #92 NEW cov: 12101 ft: 15029 corp: 38/2956b lim: 100 exec/s: 92 rss: 74Mb L: 48/100 MS: 1 CopyPart- 00:06:59.445 [2024-05-15 10:58:56.458717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.445 [2024-05-15 10:58:56.458744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.445 [2024-05-15 10:58:56.458786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.445 [2024-05-15 10:58:56.458799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.445 [2024-05-15 10:58:56.458849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.445 [2024-05-15 10:58:56.458863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.445 [2024-05-15 10:58:56.458912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.445 [2024-05-15 10:58:56.458926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.445 [2024-05-15 10:58:56.458976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:59.445 [2024-05-15 10:58:56.458989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.445 #93 NEW cov: 12101 ft: 15033 corp: 39/3056b lim: 100 exec/s: 93 rss: 74Mb L: 100/100 MS: 1 ChangeByte- 00:06:59.445 [2024-05-15 10:58:56.498642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.445 [2024-05-15 10:58:56.498668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.445 [2024-05-15 10:58:56.498702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.445 [2024-05-15 10:58:56.498716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.445 [2024-05-15 10:58:56.498763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.445 [2024-05-15 10:58:56.498777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.445 #94 NEW cov: 12101 ft: 15040 corp: 40/3129b lim: 100 exec/s: 47 rss: 74Mb L: 73/100 MS: 1 InsertRepeatedBytes- 00:06:59.445 #94 DONE cov: 12101 ft: 15040 corp: 40/3129b lim: 100 exec/s: 47 rss: 74Mb 00:06:59.445 ###### Recommended dictionary. ###### 00:06:59.445 "\000\000\000\007" # Uses: 1 00:06:59.445 ###### End of recommended dictionary. 
###### 00:06:59.445 Done 94 runs in 2 second(s) 00:06:59.445 [2024-05-15 10:58:56.520858] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:06:59.445 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.446 10:58:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:06:59.446 [2024-05-15 10:58:56.687968] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:06:59.446 [2024-05-15 10:58:56.688035] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408234 ] 00:06:59.704 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.704 [2024-05-15 10:58:56.940047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.963 [2024-05-15 10:58:57.027550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.963 [2024-05-15 10:58:57.086993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.963 [2024-05-15 10:58:57.102948] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:59.963 [2024-05-15 10:58:57.103357] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:06:59.963 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.963 INFO: Seed: 3030079384 00:06:59.963 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:06:59.963 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:06:59.963 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:59.963 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.963 #2 INITED exec/s: 0 rss: 63Mb 00:06:59.963 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:59.963 This may also happen if the target rejected all inputs we tried so far 00:06:59.963 [2024-05-15 10:58:57.169703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:14 00:06:59.963 [2024-05-15 10:58:57.169742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.250 NEW_FUNC[1/683]: 0x4a2410 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:00.250 NEW_FUNC[2/683]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:00.250 #6 NEW cov: 11829 ft: 11831 corp: 2/11b lim: 50 exec/s: 0 rss: 70Mb L: 10/10 MS: 4 CopyPart-CrossOver-ShuffleBytes-CMP- DE: "\001\000\000\000\000\000\000\015"- 00:07:00.509 [2024-05-15 10:58:57.520550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8680820738723707000 len:30841 00:07:00.509 [2024-05-15 10:58:57.520593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.509 [2024-05-15 10:58:57.520709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 00:07:00.509 [2024-05-15 10:58:57.520733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.509 [2024-05-15 10:58:57.520848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 00:07:00.509 [2024-05-15 10:58:57.520873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.509 [2024-05-15 10:58:57.520991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:00.509 [2024-05-15 10:58:57.521017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.509 NEW_FUNC[1/2]: 0xef9a10 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:07:00.509 NEW_FUNC[2/2]: 0x15d94e0 in nvme_ctrlr_get_ready_timeout /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:1224 00:07:00.509 #7 NEW cov: 11965 ft: 12875 corp: 3/53b lim: 50 exec/s: 0 rss: 70Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:07:00.509 [2024-05-15 10:58:57.570106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:2 00:07:00.509 [2024-05-15 10:58:57.570133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.509 #8 NEW cov: 11971 ft: 13080 corp: 4/71b lim: 50 exec/s: 0 rss: 70Mb L: 18/42 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\015"- 00:07:00.509 [2024-05-15 10:58:57.620186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:8449 00:07:00.509 [2024-05-15 10:58:57.620218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.509 #9 NEW cov: 12056 ft: 13504 corp: 5/82b lim: 50 exec/s: 0 rss: 70Mb L: 11/42 MS: 1 InsertByte- 00:07:00.509 [2024-05-15 10:58:57.660313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:8449 00:07:00.509 [2024-05-15 10:58:57.660344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.509 #10 NEW cov: 12056 ft: 13575 corp: 6/95b lim: 50 exec/s: 0 rss: 70Mb L: 13/42 MS: 1 CopyPart- 00:07:00.509 [2024-05-15 10:58:57.711057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:8449 00:07:00.509 [2024-05-15 10:58:57.711087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.509 [2024-05-15 10:58:57.711207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820738774038648 len:30841 00:07:00.509 [2024-05-15 10:58:57.711233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.509 [2024-05-15 10:58:57.711349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 00:07:00.509 [2024-05-15 10:58:57.711370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.509 [2024-05-15 10:58:57.711498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:00.509 [2024-05-15 10:58:57.711517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.509 #11 NEW cov: 12056 ft: 13627 corp: 7/141b lim: 50 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 CrossOver- 00:07:00.509 [2024-05-15 10:58:57.750556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:8449 00:07:00.509 [2024-05-15 10:58:57.750584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.509 #12 NEW cov: 12056 ft: 13770 corp: 8/152b lim: 50 exec/s: 0 rss: 70Mb L: 11/46 MS: 1 ShuffleBytes- 00:07:00.767 [2024-05-15 10:58:57.790686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:00.767 [2024-05-15 10:58:57.790717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.767 #14 NEW cov: 12056 ft: 13850 corp: 9/164b lim: 50 exec/s: 0 rss: 70Mb L: 12/46 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:00.767 [2024-05-15 10:58:57.830897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:00.767 [2024-05-15 10:58:57.830928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.767 [2024-05-15 10:58:57.831026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:65536 len:3371 00:07:00.767 [2024-05-15 10:58:57.831049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.767 #15 NEW cov: 12056 ft: 14140 corp: 10/184b lim: 50 exec/s: 0 rss: 71Mb L: 20/46 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\015"- 00:07:00.767 [2024-05-15 10:58:57.880971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168438273 len:34 00:07:00.767 [2024-05-15 10:58:57.880996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.767 #16 NEW cov: 12056 ft: 14161 corp: 11/196b lim: 50 exec/s: 0 rss: 71Mb L: 12/46 MS: 1 InsertByte- 00:07:00.767 [2024-05-15 10:58:57.921222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:248 00:07:00.767 [2024-05-15 10:58:57.921255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.767 [2024-05-15 10:58:57.921362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294901504 len:3371 00:07:00.767 [2024-05-15 10:58:57.921387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.767 #17 NEW cov: 12056 ft: 14237 corp: 12/216b lim: 50 exec/s: 0 rss: 71Mb L: 20/46 MS: 1 ChangeBinInt- 00:07:00.767 [2024-05-15 10:58:57.971711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:8449 00:07:00.767 [2024-05-15 10:58:57.971740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.767 [2024-05-15 10:58:57.971803] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820738774038648 len:30841 00:07:00.767 [2024-05-15 10:58:57.971822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.768 [2024-05-15 10:58:57.971929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 00:07:00.768 [2024-05-15 10:58:57.971950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.768 [2024-05-15 10:58:57.972068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:00.768 [2024-05-15 10:58:57.972088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.768 #18 NEW cov: 12056 ft: 14265 corp: 13/262b lim: 50 exec/s: 0 rss: 71Mb L: 46/46 MS: 1 ShuffleBytes- 00:07:00.768 [2024-05-15 10:58:58.021417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:723391789674790912 len:1 00:07:00.768 [2024-05-15 10:58:58.021442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.025 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:01.025 #21 NEW cov: 12079 ft: 14305 corp: 14/279b lim: 50 exec/s: 0 rss: 71Mb L: 17/46 MS: 3 EraseBytes-EraseBytes-CrossOver- 00:07:01.026 [2024-05-15 10:58:58.062038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:36284052144384 len:1 00:07:01.026 [2024-05-15 10:58:58.062068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.026 [2024-05-15 10:58:58.062115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820739101691256 len:30841 00:07:01.026 [2024-05-15 10:58:58.062133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.026 [2024-05-15 10:58:58.062258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 00:07:01.026 [2024-05-15 10:58:58.062281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.026 [2024-05-15 10:58:58.062397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:01.026 [2024-05-15 10:58:58.062420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.026 #23 NEW cov: 12079 ft: 14326 corp: 15/327b lim: 50 exec/s: 0 rss: 71Mb L: 48/48 MS: 2 EraseBytes-CrossOver- 00:07:01.026 [2024-05-15 10:58:58.101606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:720857415372767242 len:1 00:07:01.026 [2024-05-15 10:58:58.101634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.026 #24 
NEW cov: 12079 ft: 14400 corp: 16/344b lim: 50 exec/s: 0 rss: 71Mb L: 17/48 MS: 1 ShuffleBytes- 00:07:01.026 [2024-05-15 10:58:58.152333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:36284052144384 len:1 00:07:01.026 [2024-05-15 10:58:58.152364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.026 [2024-05-15 10:58:58.152436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820739101691256 len:30841 00:07:01.026 [2024-05-15 10:58:58.152457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.026 [2024-05-15 10:58:58.152575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 00:07:01.026 [2024-05-15 10:58:58.152595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.026 [2024-05-15 10:58:58.152713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:01.026 [2024-05-15 10:58:58.152736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.026 #25 NEW cov: 12079 ft: 14444 corp: 17/392b lim: 50 exec/s: 25 rss: 71Mb L: 48/48 MS: 1 ChangeBinInt- 00:07:01.026 [2024-05-15 10:58:58.201944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427834 len:34 00:07:01.026 [2024-05-15 10:58:58.201975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.026 #26 NEW cov: 12079 ft: 14497 corp: 18/404b lim: 50 exec/s: 26 rss: 71Mb L: 12/48 MS: 1 InsertByte- 00:07:01.026 [2024-05-15 10:58:58.242272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:248 00:07:01.026 [2024-05-15 10:58:58.242301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.026 [2024-05-15 10:58:58.242410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294901504 len:1 00:07:01.026 [2024-05-15 10:58:58.242432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.026 #27 NEW cov: 12079 ft: 14536 corp: 19/429b lim: 50 exec/s: 27 rss: 71Mb L: 25/48 MS: 1 CopyPart- 00:07:01.284 [2024-05-15 10:58:58.293028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8680820738723707000 len:30841 00:07:01.284 [2024-05-15 10:58:58.293062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 10:58:58.293128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:257 00:07:01.284 [2024-05-15 10:58:58.293146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 
10:58:58.293251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3791631488647168 len:30841 00:07:01.284 [2024-05-15 10:58:58.293273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 10:58:58.293394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:01.284 [2024-05-15 10:58:58.293417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 10:58:58.293524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:8680820740569200760 len:30841 00:07:01.284 [2024-05-15 10:58:58.293545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.284 #28 NEW cov: 12079 ft: 14628 corp: 20/479b lim: 50 exec/s: 28 rss: 71Mb L: 50/50 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\015"- 00:07:01.284 [2024-05-15 10:58:58.342533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:723390690163163136 len:11 00:07:01.284 [2024-05-15 10:58:58.342566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 10:58:58.342668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4462804992 len:1 00:07:01.284 [2024-05-15 10:58:58.342684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.284 #29 NEW cov: 12079 ft: 14641 corp: 21/507b lim: 50 exec/s: 29 rss: 71Mb L: 28/50 MS: 1 CopyPart- 00:07:01.284 [2024-05-15 10:58:58.382592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:723390690163163136 len:11 00:07:01.284 [2024-05-15 10:58:58.382621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 10:58:58.382730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:197736333312 len:1 00:07:01.284 [2024-05-15 10:58:58.382752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.284 #30 NEW cov: 12079 ft: 14662 corp: 22/535b lim: 50 exec/s: 30 rss: 71Mb L: 28/50 MS: 1 ChangeByte- 00:07:01.284 [2024-05-15 10:58:58.433170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:36284052144384 len:1 00:07:01.284 [2024-05-15 10:58:58.433203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 10:58:58.433252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820739101691256 len:30841 00:07:01.284 [2024-05-15 10:58:58.433275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 10:58:58.433396] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 00:07:01.284 [2024-05-15 10:58:58.433416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.284 [2024-05-15 10:58:58.433532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:01.284 [2024-05-15 10:58:58.433552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.284 #31 NEW cov: 12079 ft: 14678 corp: 23/583b lim: 50 exec/s: 31 rss: 72Mb L: 48/50 MS: 1 ShuffleBytes- 00:07:01.284 [2024-05-15 10:58:58.472588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:46 00:07:01.284 [2024-05-15 10:58:58.472614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.284 #32 NEW cov: 12079 ft: 14728 corp: 24/596b lim: 50 exec/s: 32 rss: 72Mb L: 13/50 MS: 1 InsertByte- 00:07:01.285 [2024-05-15 10:58:58.512823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:8449 00:07:01.285 [2024-05-15 10:58:58.512851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.285 #33 NEW cov: 12079 ft: 14776 corp: 25/607b lim: 50 exec/s: 33 rss: 72Mb L: 11/50 MS: 1 ShuffleBytes- 00:07:01.543 [2024-05-15 10:58:58.563583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8680820738723707000 len:30841 00:07:01.543 [2024-05-15 10:58:58.563615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.563738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 00:07:01.543 [2024-05-15 10:58:58.563763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.563871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 00:07:01.543 [2024-05-15 10:58:58.563888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.564001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:01.543 [2024-05-15 10:58:58.564026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.543 #34 NEW cov: 12079 ft: 14813 corp: 26/649b lim: 50 exec/s: 34 rss: 72Mb L: 42/50 MS: 1 ShuffleBytes- 00:07:01.543 [2024-05-15 10:58:58.603470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:01.543 [2024-05-15 10:58:58.603503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.603622] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:65536 len:3371 00:07:01.543 [2024-05-15 10:58:58.603641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.603754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:01.543 [2024-05-15 10:58:58.603776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.543 #35 NEW cov: 12079 ft: 15043 corp: 27/682b lim: 50 exec/s: 35 rss: 72Mb L: 33/50 MS: 1 InsertRepeatedBytes- 00:07:01.543 [2024-05-15 10:58:58.643745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:8449 00:07:01.543 [2024-05-15 10:58:58.643783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.643888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820738774038648 len:30841 00:07:01.543 [2024-05-15 10:58:58.643914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.644029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30884 00:07:01.543 [2024-05-15 10:58:58.644047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.644161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:01.543 [2024-05-15 10:58:58.644186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.543 #36 NEW cov: 12079 ft: 15075 corp: 28/728b lim: 50 exec/s: 36 rss: 72Mb L: 46/50 MS: 1 ChangeByte- 00:07:01.543 [2024-05-15 10:58:58.683864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:36284052144384 len:1 00:07:01.543 [2024-05-15 10:58:58.683893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.683947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8680820739104964621 len:30841 00:07:01.543 [2024-05-15 10:58:58.683969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.684084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 00:07:01.543 [2024-05-15 10:58:58.684106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.543 [2024-05-15 10:58:58.684220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 00:07:01.543 [2024-05-15 10:58:58.684242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.543 #37 NEW cov: 12079 ft: 15078 corp: 29/777b lim: 50 exec/s: 37 rss: 72Mb L: 49/50 MS: 1 InsertByte- 00:07:01.543 [2024-05-15 10:58:58.733554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:723391789674790912 len:1 00:07:01.543 [2024-05-15 10:58:58.733586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.543 #38 NEW cov: 12079 ft: 15130 corp: 30/794b lim: 50 exec/s: 38 rss: 72Mb L: 17/50 MS: 1 ChangeByte- 00:07:01.543 [2024-05-15 10:58:58.773178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:31234 00:07:01.543 [2024-05-15 10:58:58.773210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.543 #39 NEW cov: 12079 ft: 15147 corp: 31/812b lim: 50 exec/s: 39 rss: 72Mb L: 18/50 MS: 1 ChangeByte- 00:07:01.802 [2024-05-15 10:58:58.823732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427776 len:14 00:07:01.802 [2024-05-15 10:58:58.823762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.802 #40 NEW cov: 12079 ft: 15158 corp: 32/823b lim: 50 exec/s: 40 rss: 72Mb L: 11/50 MS: 1 EraseBytes- 00:07:01.802 [2024-05-15 10:58:58.863845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168493312 len:14 00:07:01.802 [2024-05-15 10:58:58.863874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.802 #41 NEW cov: 12079 ft: 15166 corp: 33/834b lim: 50 exec/s: 41 rss: 72Mb L: 11/50 MS: 1 ChangeBinInt- 00:07:01.802 [2024-05-15 10:58:58.914151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:96 len:248 00:07:01.802 [2024-05-15 10:58:58.914180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.802 [2024-05-15 10:58:58.914295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294901504 len:3371 00:07:01.802 [2024-05-15 10:58:58.914314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.802 #42 NEW cov: 12079 ft: 15177 corp: 34/854b lim: 50 exec/s: 42 rss: 72Mb L: 20/50 MS: 1 ChangeByte- 00:07:01.802 [2024-05-15 10:58:58.954172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167837706 len:8449 00:07:01.802 [2024-05-15 10:58:58.954198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.802 #43 NEW cov: 12079 ft: 15189 corp: 35/865b lim: 50 exec/s: 43 rss: 72Mb L: 11/50 MS: 1 ShuffleBytes- 00:07:01.802 [2024-05-15 10:58:58.994394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:257 00:07:01.802 [2024-05-15 10:58:58.994423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:01.802 [2024-05-15 10:58:58.994546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3659174697238528 len:3371 00:07:01.802 [2024-05-15 10:58:58.994568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.802 #44 NEW cov: 12079 ft: 15201 corp: 36/885b lim: 50 exec/s: 44 rss: 72Mb L: 20/50 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\015"- 00:07:01.802 [2024-05-15 10:58:59.034479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:27021597932650496 len:1 00:07:01.802 [2024-05-15 10:58:59.034520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.802 [2024-05-15 10:58:59.034637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374967954664587262 len:1 00:07:01.802 [2024-05-15 10:58:59.034661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.802 #45 NEW cov: 12079 ft: 15214 corp: 37/911b lim: 50 exec/s: 45 rss: 72Mb L: 26/50 MS: 1 CrossOver- 00:07:02.060 [2024-05-15 10:58:59.074476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168427816 len:14 00:07:02.060 [2024-05-15 10:58:59.074504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.060 #46 NEW cov: 12079 ft: 15220 corp: 38/921b lim: 50 exec/s: 46 rss: 72Mb L: 10/50 MS: 1 ChangeByte- 00:07:02.060 [2024-05-15 10:58:59.114633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:704708609 len:1 00:07:02.060 [2024-05-15 10:58:59.114663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.060 #50 NEW cov: 12079 ft: 15284 corp: 39/933b lim: 50 exec/s: 50 rss: 72Mb L: 12/50 MS: 4 EraseBytes-InsertByte-EraseBytes-PersAutoDict- DE: "\001\000\000\000\000\000\000\015"- 00:07:02.060 [2024-05-15 10:58:59.165205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:723391789674790912 len:1 00:07:02.060 [2024-05-15 10:58:59.165236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.060 [2024-05-15 10:58:59.165282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1591483802437686806 len:5655 00:07:02.060 [2024-05-15 10:58:59.165303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.060 [2024-05-15 10:58:59.165401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1591483802437686806 len:5655 00:07:02.060 [2024-05-15 10:58:59.165422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.060 [2024-05-15 10:58:59.165536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1591483802437686806 len:1 00:07:02.061 [2024-05-15 10:58:59.165560] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.061 #51 NEW cov: 12079 ft: 15289 corp: 40/978b lim: 50 exec/s: 25 rss: 72Mb L: 45/50 MS: 1 InsertRepeatedBytes- 00:07:02.061 #51 DONE cov: 12079 ft: 15289 corp: 40/978b lim: 50 exec/s: 25 rss: 72Mb 00:07:02.061 ###### Recommended dictionary. ###### 00:07:02.061 "\001\000\000\000\000\000\000\015" # Uses: 5 00:07:02.061 ###### End of recommended dictionary. ###### 00:07:02.061 Done 51 runs in 2 second(s) 00:07:02.061 [2024-05-15 10:58:59.194811] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:02.061 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:02.319 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:02.319 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:02.319 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:02.319 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:02.319 10:58:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:02.319 [2024-05-15 10:58:59.362896] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:02.319 [2024-05-15 10:58:59.362960] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408765 ] 00:07:02.319 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.577 [2024-05-15 10:58:59.614755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.577 [2024-05-15 10:58:59.703612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.577 [2024-05-15 10:58:59.762851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.577 [2024-05-15 10:58:59.778800] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:02.577 [2024-05-15 10:58:59.779216] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:02.577 INFO: Running with entropic power schedule (0xFF, 100). 00:07:02.577 INFO: Seed: 1409104728 00:07:02.577 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:07:02.577 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:07:02.577 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:02.577 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.577 #2 INITED exec/s: 0 rss: 63Mb 00:07:02.577 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:02.577 This may also happen if the target rejected all inputs we tried so far 00:07:02.577 [2024-05-15 10:58:59.827937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.577 [2024-05-15 10:58:59.827968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.577 [2024-05-15 10:58:59.828026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.577 [2024-05-15 10:58:59.828042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.093 NEW_FUNC[1/687]: 0x4a3fd0 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:03.094 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:03.094 #4 NEW cov: 11893 ft: 11888 corp: 2/37b lim: 90 exec/s: 0 rss: 70Mb L: 36/36 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:03.094 [2024-05-15 10:59:00.158904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.094 [2024-05-15 10:59:00.158956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.094 [2024-05-15 10:59:00.159033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.094 [2024-05-15 10:59:00.159050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.094 #10 NEW cov: 12023 ft: 12507 corp: 3/73b lim: 90 exec/s: 0 rss: 
71Mb L: 36/36 MS: 1 CrossOver- 00:07:03.094 [2024-05-15 10:59:00.208636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.094 [2024-05-15 10:59:00.208667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.094 #16 NEW cov: 12029 ft: 13593 corp: 4/101b lim: 90 exec/s: 0 rss: 71Mb L: 28/36 MS: 1 EraseBytes- 00:07:03.094 [2024-05-15 10:59:00.258907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.094 [2024-05-15 10:59:00.258938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.094 [2024-05-15 10:59:00.258984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.094 [2024-05-15 10:59:00.259000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.094 #22 NEW cov: 12114 ft: 14000 corp: 5/137b lim: 90 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 ChangeBinInt- 00:07:03.094 [2024-05-15 10:59:00.298917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.094 [2024-05-15 10:59:00.298946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.094 #23 NEW cov: 12114 ft: 14085 corp: 6/165b lim: 90 exec/s: 0 rss: 71Mb L: 28/36 MS: 1 CrossOver- 00:07:03.094 [2024-05-15 10:59:00.349552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.094 [2024-05-15 10:59:00.349581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.094 [2024-05-15 10:59:00.349623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.094 [2024-05-15 10:59:00.349639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.094 [2024-05-15 10:59:00.349693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.094 [2024-05-15 10:59:00.349710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.094 [2024-05-15 10:59:00.349768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:03.094 [2024-05-15 10:59:00.349782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.352 #24 NEW cov: 12114 ft: 14559 corp: 7/238b lim: 90 exec/s: 0 rss: 71Mb L: 73/73 MS: 1 InsertRepeatedBytes- 00:07:03.352 [2024-05-15 10:59:00.389335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.352 [2024-05-15 10:59:00.389363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.352 [2024-05-15 10:59:00.389411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.352 [2024-05-15 
10:59:00.389428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.352 #25 NEW cov: 12114 ft: 14638 corp: 8/274b lim: 90 exec/s: 0 rss: 71Mb L: 36/73 MS: 1 ChangeByte- 00:07:03.352 [2024-05-15 10:59:00.429386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.352 [2024-05-15 10:59:00.429415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.352 [2024-05-15 10:59:00.429444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.352 [2024-05-15 10:59:00.429460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.352 #30 NEW cov: 12114 ft: 14670 corp: 9/312b lim: 90 exec/s: 0 rss: 71Mb L: 38/73 MS: 5 InsertByte-InsertByte-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:03.352 [2024-05-15 10:59:00.469720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.352 [2024-05-15 10:59:00.469748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.352 [2024-05-15 10:59:00.469783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.352 [2024-05-15 10:59:00.469798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.352 [2024-05-15 10:59:00.469857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.352 [2024-05-15 10:59:00.469873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.352 #36 NEW cov: 12114 ft: 14988 corp: 10/380b lim: 90 exec/s: 0 rss: 71Mb L: 68/73 MS: 1 InsertRepeatedBytes- 00:07:03.352 [2024-05-15 10:59:00.519678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.352 [2024-05-15 10:59:00.519706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.352 [2024-05-15 10:59:00.519758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.352 [2024-05-15 10:59:00.519773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.352 #37 NEW cov: 12114 ft: 15062 corp: 11/418b lim: 90 exec/s: 0 rss: 71Mb L: 38/73 MS: 1 ChangeByte- 00:07:03.352 [2024-05-15 10:59:00.569813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.352 [2024-05-15 10:59:00.569843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.352 [2024-05-15 10:59:00.569892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.352 [2024-05-15 10:59:00.569908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:03.352 #38 NEW cov: 12114 ft: 15075 corp: 12/454b lim: 90 exec/s: 0 rss: 71Mb L: 36/73 MS: 1 ChangeBit- 00:07:03.353 [2024-05-15 10:59:00.609772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.353 [2024-05-15 10:59:00.609800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.611 #39 NEW cov: 12114 ft: 15120 corp: 13/485b lim: 90 exec/s: 0 rss: 71Mb L: 31/73 MS: 1 EraseBytes- 00:07:03.611 [2024-05-15 10:59:00.650073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.611 [2024-05-15 10:59:00.650100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.611 [2024-05-15 10:59:00.650130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.611 [2024-05-15 10:59:00.650146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.611 #40 NEW cov: 12114 ft: 15133 corp: 14/521b lim: 90 exec/s: 0 rss: 72Mb L: 36/73 MS: 1 ShuffleBytes- 00:07:03.611 [2024-05-15 10:59:00.690541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.611 [2024-05-15 10:59:00.690569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.611 [2024-05-15 10:59:00.690617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.611 [2024-05-15 10:59:00.690636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.611 [2024-05-15 10:59:00.690691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.611 [2024-05-15 10:59:00.690707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.611 [2024-05-15 10:59:00.690764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:03.611 [2024-05-15 10:59:00.690780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.611 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:03.611 #44 NEW cov: 12137 ft: 15214 corp: 15/610b lim: 90 exec/s: 0 rss: 72Mb L: 89/89 MS: 4 ChangeByte-InsertRepeatedBytes-ChangeBit-InsertRepeatedBytes- 00:07:03.611 [2024-05-15 10:59:00.730284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.611 [2024-05-15 10:59:00.730313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.611 [2024-05-15 10:59:00.730345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.611 [2024-05-15 10:59:00.730361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.611 #45 NEW cov: 12137 ft: 15247 
corp: 16/646b lim: 90 exec/s: 0 rss: 72Mb L: 36/89 MS: 1 ChangeByte- 00:07:03.611 [2024-05-15 10:59:00.780425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.611 [2024-05-15 10:59:00.780455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.611 [2024-05-15 10:59:00.780487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.611 [2024-05-15 10:59:00.780503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.611 #46 NEW cov: 12137 ft: 15277 corp: 17/684b lim: 90 exec/s: 0 rss: 72Mb L: 38/89 MS: 1 ChangeByte- 00:07:03.611 [2024-05-15 10:59:00.830580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.611 [2024-05-15 10:59:00.830608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.611 [2024-05-15 10:59:00.830648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.611 [2024-05-15 10:59:00.830664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.611 #47 NEW cov: 12137 ft: 15339 corp: 18/722b lim: 90 exec/s: 47 rss: 72Mb L: 38/89 MS: 1 ChangeBinInt- 00:07:03.611 [2024-05-15 10:59:00.870704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.611 [2024-05-15 10:59:00.870732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.611 [2024-05-15 10:59:00.870763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.611 [2024-05-15 10:59:00.870778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.869 #48 NEW cov: 12137 ft: 15358 corp: 19/758b lim: 90 exec/s: 48 rss: 72Mb L: 36/89 MS: 1 ChangeBit- 00:07:03.869 [2024-05-15 10:59:00.910660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.869 [2024-05-15 10:59:00.910688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.869 #49 NEW cov: 12137 ft: 15370 corp: 20/787b lim: 90 exec/s: 49 rss: 72Mb L: 29/89 MS: 1 InsertByte- 00:07:03.869 [2024-05-15 10:59:00.950908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.869 [2024-05-15 10:59:00.950936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.869 [2024-05-15 10:59:00.950968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.869 [2024-05-15 10:59:00.950983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.869 #50 NEW cov: 12137 ft: 15385 corp: 21/823b lim: 90 exec/s: 50 rss: 72Mb L: 36/89 MS: 1 ShuffleBytes- 00:07:03.869 [2024-05-15 
10:59:01.000882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.869 [2024-05-15 10:59:01.000909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.869 #51 NEW cov: 12137 ft: 15395 corp: 22/851b lim: 90 exec/s: 51 rss: 72Mb L: 28/89 MS: 1 CopyPart- 00:07:03.869 [2024-05-15 10:59:01.051161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.869 [2024-05-15 10:59:01.051189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.869 [2024-05-15 10:59:01.051231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.869 [2024-05-15 10:59:01.051246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.869 #52 NEW cov: 12137 ft: 15410 corp: 23/889b lim: 90 exec/s: 52 rss: 72Mb L: 38/89 MS: 1 CopyPart- 00:07:03.869 [2024-05-15 10:59:01.091296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.869 [2024-05-15 10:59:01.091323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.869 [2024-05-15 10:59:01.091359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.869 [2024-05-15 10:59:01.091375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.869 #53 NEW cov: 12137 ft: 15419 corp: 24/925b lim: 90 exec/s: 53 rss: 72Mb L: 36/89 MS: 1 ChangeByte- 00:07:03.869 [2024-05-15 10:59:01.131393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.869 [2024-05-15 10:59:01.131426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.869 [2024-05-15 10:59:01.131490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.869 [2024-05-15 10:59:01.131507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.127 #54 NEW cov: 12137 ft: 15459 corp: 25/961b lim: 90 exec/s: 54 rss: 72Mb L: 36/89 MS: 1 ShuffleBytes- 00:07:04.127 [2024-05-15 10:59:01.181582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.127 [2024-05-15 10:59:01.181610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.127 [2024-05-15 10:59:01.181645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.127 [2024-05-15 10:59:01.181659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.127 #55 NEW cov: 12137 ft: 15502 corp: 26/998b lim: 90 exec/s: 55 rss: 72Mb L: 37/89 MS: 1 InsertByte- 00:07:04.127 [2024-05-15 10:59:01.221541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:07:04.127 [2024-05-15 10:59:01.221569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.127 #56 NEW cov: 12137 ft: 15505 corp: 27/1029b lim: 90 exec/s: 56 rss: 73Mb L: 31/89 MS: 1 ChangeByte- 00:07:04.127 [2024-05-15 10:59:01.271815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.127 [2024-05-15 10:59:01.271844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.127 [2024-05-15 10:59:01.271875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.127 [2024-05-15 10:59:01.271890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.127 #57 NEW cov: 12137 ft: 15524 corp: 28/1067b lim: 90 exec/s: 57 rss: 73Mb L: 38/89 MS: 1 ChangeBinInt- 00:07:04.127 [2024-05-15 10:59:01.321967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.127 [2024-05-15 10:59:01.321995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.127 [2024-05-15 10:59:01.322024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.127 [2024-05-15 10:59:01.322040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.127 #58 NEW cov: 12137 ft: 15537 corp: 29/1106b lim: 90 exec/s: 58 rss: 73Mb L: 39/89 MS: 1 CMP- DE: "\000\000\000\000\377\377\377\377"- 00:07:04.127 [2024-05-15 10:59:01.371929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.127 [2024-05-15 10:59:01.371957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.385 #59 NEW cov: 12137 ft: 15552 corp: 30/1135b lim: 90 exec/s: 59 rss: 73Mb L: 29/89 MS: 1 ShuffleBytes- 00:07:04.385 [2024-05-15 10:59:01.422099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.385 [2024-05-15 10:59:01.422128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.385 #60 NEW cov: 12137 ft: 15567 corp: 31/1168b lim: 90 exec/s: 60 rss: 73Mb L: 33/89 MS: 1 CrossOver- 00:07:04.385 [2024-05-15 10:59:01.472390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.385 [2024-05-15 10:59:01.472420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.385 [2024-05-15 10:59:01.472450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.385 [2024-05-15 10:59:01.472466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.385 #61 NEW cov: 12137 ft: 15592 corp: 32/1204b lim: 90 exec/s: 61 rss: 73Mb L: 36/89 MS: 1 ChangeByte- 00:07:04.385 [2024-05-15 10:59:01.522532] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.385 [2024-05-15 10:59:01.522562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.385 [2024-05-15 10:59:01.522604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.385 [2024-05-15 10:59:01.522619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.385 #62 NEW cov: 12137 ft: 15642 corp: 33/1240b lim: 90 exec/s: 62 rss: 73Mb L: 36/89 MS: 1 ShuffleBytes- 00:07:04.385 [2024-05-15 10:59:01.562625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.385 [2024-05-15 10:59:01.562654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.385 [2024-05-15 10:59:01.562717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.385 [2024-05-15 10:59:01.562732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.385 #63 NEW cov: 12137 ft: 15657 corp: 34/1276b lim: 90 exec/s: 63 rss: 73Mb L: 36/89 MS: 1 ChangeBit- 00:07:04.385 [2024-05-15 10:59:01.602867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.385 [2024-05-15 10:59:01.602895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.385 [2024-05-15 10:59:01.602936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.385 [2024-05-15 10:59:01.602951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.385 [2024-05-15 10:59:01.603010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.385 [2024-05-15 10:59:01.603027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.385 #64 NEW cov: 12137 ft: 15675 corp: 35/1344b lim: 90 exec/s: 64 rss: 74Mb L: 68/89 MS: 1 ChangeByte- 00:07:04.644 [2024-05-15 10:59:01.652836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.644 [2024-05-15 10:59:01.652867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.644 [2024-05-15 10:59:01.652940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.644 [2024-05-15 10:59:01.652959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.644 #65 NEW cov: 12137 ft: 15694 corp: 36/1380b lim: 90 exec/s: 65 rss: 74Mb L: 36/89 MS: 1 ShuffleBytes- 00:07:04.644 [2024-05-15 10:59:01.703008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.644 [2024-05-15 10:59:01.703035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.644 [2024-05-15 10:59:01.703090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.644 [2024-05-15 10:59:01.703104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.644 #66 NEW cov: 12137 ft: 15708 corp: 37/1418b lim: 90 exec/s: 66 rss: 74Mb L: 38/89 MS: 1 EraseBytes- 00:07:04.644 [2024-05-15 10:59:01.753309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.644 [2024-05-15 10:59:01.753337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.644 [2024-05-15 10:59:01.753371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.644 [2024-05-15 10:59:01.753393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.644 [2024-05-15 10:59:01.753447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.644 [2024-05-15 10:59:01.753462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.644 #67 NEW cov: 12137 ft: 15730 corp: 38/1474b lim: 90 exec/s: 67 rss: 74Mb L: 56/89 MS: 1 CrossOver- 00:07:04.644 [2024-05-15 10:59:01.803294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.644 [2024-05-15 10:59:01.803322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.644 [2024-05-15 10:59:01.803367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.644 [2024-05-15 10:59:01.803387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.644 #68 NEW cov: 12137 ft: 15782 corp: 39/1510b lim: 90 exec/s: 34 rss: 74Mb L: 36/89 MS: 1 PersAutoDict- DE: "\000\000\000\000\377\377\377\377"- 00:07:04.644 #68 DONE cov: 12137 ft: 15782 corp: 39/1510b lim: 90 exec/s: 34 rss: 74Mb 00:07:04.644 ###### Recommended dictionary. ###### 00:07:04.644 "\000\000\000\000\377\377\377\377" # Uses: 1 00:07:04.644 ###### End of recommended dictionary. 
###### 00:07:04.644 Done 68 runs in 2 second(s) 00:07:04.645 [2024-05-15 10:59:01.832632] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.903 10:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:04.903 [2024-05-15 10:59:02.001403] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:04.903 [2024-05-15 10:59:02.001475] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409302 ] 00:07:04.903 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.160 [2024-05-15 10:59:02.253152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.160 [2024-05-15 10:59:02.345009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.160 [2024-05-15 10:59:02.404189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.160 [2024-05-15 10:59:02.420144] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:05.160 [2024-05-15 10:59:02.420562] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:05.417 INFO: Running with entropic power schedule (0xFF, 100). 00:07:05.417 INFO: Seed: 4052105975 00:07:05.417 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:07:05.417 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:07:05.417 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:05.417 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.417 #2 INITED exec/s: 0 rss: 63Mb 00:07:05.417 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:05.417 This may also happen if the target rejected all inputs we tried so far 00:07:05.417 [2024-05-15 10:59:02.465684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.417 [2024-05-15 10:59:02.465713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.417 [2024-05-15 10:59:02.465767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.417 [2024-05-15 10:59:02.465783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.673 NEW_FUNC[1/687]: 0x4a71f0 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:05.673 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:05.673 #11 NEW cov: 11858 ft: 11860 corp: 2/27b lim: 50 exec/s: 0 rss: 70Mb L: 26/26 MS: 4 InsertByte-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:07:05.673 [2024-05-15 10:59:02.796802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.673 [2024-05-15 10:59:02.796838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.673 [2024-05-15 10:59:02.796897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.673 [2024-05-15 10:59:02.796913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.674 [2024-05-15 10:59:02.796969] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.674 [2024-05-15 10:59:02.796985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.674 #12 NEW cov: 11998 ft: 12759 corp: 3/66b lim: 50 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:05.674 [2024-05-15 10:59:02.847024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.674 [2024-05-15 10:59:02.847055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.674 [2024-05-15 10:59:02.847090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.674 [2024-05-15 10:59:02.847105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.674 [2024-05-15 10:59:02.847165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.674 [2024-05-15 10:59:02.847181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.674 [2024-05-15 10:59:02.847238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:05.674 [2024-05-15 10:59:02.847252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.674 #13 NEW cov: 12004 ft: 13395 corp: 4/106b lim: 50 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:07:05.674 [2024-05-15 10:59:02.887120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.674 [2024-05-15 10:59:02.887150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.674 [2024-05-15 10:59:02.887184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.674 [2024-05-15 10:59:02.887199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.674 [2024-05-15 10:59:02.887256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.674 [2024-05-15 10:59:02.887273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.674 [2024-05-15 10:59:02.887333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:05.674 [2024-05-15 10:59:02.887349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.674 #14 NEW cov: 12089 ft: 13706 corp: 5/146b lim: 50 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:05.674 [2024-05-15 10:59:02.936943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.674 [2024-05-15 10:59:02.936972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.674 [2024-05-15 10:59:02.937004] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.674 [2024-05-15 10:59:02.937019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.931 #15 NEW cov: 12089 ft: 13889 corp: 6/172b lim: 50 exec/s: 0 rss: 70Mb L: 26/40 MS: 1 ChangeBit- 00:07:05.931 [2024-05-15 10:59:02.977387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.931 [2024-05-15 10:59:02.977418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:02.977461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.931 [2024-05-15 10:59:02.977477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:02.977535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.931 [2024-05-15 10:59:02.977549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:02.977608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:05.931 [2024-05-15 10:59:02.977624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.931 #16 NEW cov: 12089 ft: 13932 corp: 7/212b lim: 50 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:05.931 [2024-05-15 10:59:03.017143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.931 [2024-05-15 10:59:03.017174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:03.017217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.931 [2024-05-15 10:59:03.017232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.931 #17 NEW cov: 12089 ft: 13999 corp: 8/237b lim: 50 exec/s: 0 rss: 70Mb L: 25/40 MS: 1 EraseBytes- 00:07:05.931 [2024-05-15 10:59:03.067658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.931 [2024-05-15 10:59:03.067687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:03.067722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.931 [2024-05-15 10:59:03.067738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:03.067798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.931 [2024-05-15 10:59:03.067813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:03.067872] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:05.931 [2024-05-15 10:59:03.067889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.931 #18 NEW cov: 12089 ft: 14044 corp: 9/277b lim: 50 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:05.931 [2024-05-15 10:59:03.117456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.931 [2024-05-15 10:59:03.117484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:03.117518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.931 [2024-05-15 10:59:03.117534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.931 #19 NEW cov: 12089 ft: 14132 corp: 10/305b lim: 50 exec/s: 0 rss: 71Mb L: 28/40 MS: 1 CopyPart- 00:07:05.931 [2024-05-15 10:59:03.167888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.931 [2024-05-15 10:59:03.167917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:03.167959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.931 [2024-05-15 10:59:03.167975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:03.168032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.931 [2024-05-15 10:59:03.168048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.931 [2024-05-15 10:59:03.168108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:05.931 [2024-05-15 10:59:03.168123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.189 #20 NEW cov: 12089 ft: 14203 corp: 11/345b lim: 50 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CrossOver- 00:07:06.189 [2024-05-15 10:59:03.218007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.189 [2024-05-15 10:59:03.218038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.218075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.189 [2024-05-15 10:59:03.218092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.218149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.189 [2024-05-15 10:59:03.218166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.218224] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.189 [2024-05-15 10:59:03.218238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.189 #21 NEW cov: 12089 ft: 14232 corp: 12/391b lim: 50 exec/s: 0 rss: 71Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:07:06.189 [2024-05-15 10:59:03.258118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.189 [2024-05-15 10:59:03.258147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.258181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.189 [2024-05-15 10:59:03.258196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.258252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.189 [2024-05-15 10:59:03.258269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.258325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.189 [2024-05-15 10:59:03.258340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.189 #22 NEW cov: 12089 ft: 14244 corp: 13/436b lim: 50 exec/s: 0 rss: 71Mb L: 45/46 MS: 1 InsertRepeatedBytes- 00:07:06.189 [2024-05-15 10:59:03.298310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.189 [2024-05-15 10:59:03.298339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.298373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.189 [2024-05-15 10:59:03.298394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.298448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.189 [2024-05-15 10:59:03.298463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.298523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.189 [2024-05-15 10:59:03.298539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.189 #23 NEW cov: 12089 ft: 14262 corp: 14/476b lim: 50 exec/s: 0 rss: 71Mb L: 40/46 MS: 1 ShuffleBytes- 00:07:06.189 [2024-05-15 10:59:03.338332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.189 [2024-05-15 10:59:03.338361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.338415] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.189 [2024-05-15 10:59:03.338432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.338488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.189 [2024-05-15 10:59:03.338505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.338565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.189 [2024-05-15 10:59:03.338580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.189 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:06.189 #24 NEW cov: 12112 ft: 14329 corp: 15/516b lim: 50 exec/s: 0 rss: 71Mb L: 40/46 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:06.189 [2024-05-15 10:59:03.388522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.189 [2024-05-15 10:59:03.388551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.388595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.189 [2024-05-15 10:59:03.388611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.388667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.189 [2024-05-15 10:59:03.388684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.388740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.189 [2024-05-15 10:59:03.388756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.189 #25 NEW cov: 12112 ft: 14385 corp: 16/565b lim: 50 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:07:06.189 [2024-05-15 10:59:03.428674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.189 [2024-05-15 10:59:03.428703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.428744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.189 [2024-05-15 10:59:03.428760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.189 [2024-05-15 10:59:03.428816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.189 [2024-05-15 10:59:03.428833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:06.189 [2024-05-15 10:59:03.428890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.189 [2024-05-15 10:59:03.428906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.447 #26 NEW cov: 12112 ft: 14498 corp: 17/613b lim: 50 exec/s: 26 rss: 71Mb L: 48/49 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:06.447 [2024-05-15 10:59:03.478639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.447 [2024-05-15 10:59:03.478668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.478699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.447 [2024-05-15 10:59:03.478715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.478772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.447 [2024-05-15 10:59:03.478787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.447 #27 NEW cov: 12112 ft: 14524 corp: 18/652b lim: 50 exec/s: 27 rss: 71Mb L: 39/49 MS: 1 ChangeBit- 00:07:06.447 [2024-05-15 10:59:03.528881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.447 [2024-05-15 10:59:03.528910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.528954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.447 [2024-05-15 10:59:03.528975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.529032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.447 [2024-05-15 10:59:03.529048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.529107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.447 [2024-05-15 10:59:03.529123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.447 #28 NEW cov: 12112 ft: 14536 corp: 19/693b lim: 50 exec/s: 28 rss: 71Mb L: 41/49 MS: 1 InsertByte- 00:07:06.447 [2024-05-15 10:59:03.568992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.447 [2024-05-15 10:59:03.569021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.569071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.447 [2024-05-15 10:59:03.569088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.569145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.447 [2024-05-15 10:59:03.569160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.569220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.447 [2024-05-15 10:59:03.569236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.447 #29 NEW cov: 12112 ft: 14547 corp: 20/738b lim: 50 exec/s: 29 rss: 71Mb L: 45/49 MS: 1 ChangeBit- 00:07:06.447 [2024-05-15 10:59:03.619218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.447 [2024-05-15 10:59:03.619247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.619280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.447 [2024-05-15 10:59:03.619296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.619353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.447 [2024-05-15 10:59:03.619369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.619435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.447 [2024-05-15 10:59:03.619451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.447 #30 NEW cov: 12112 ft: 14579 corp: 21/778b lim: 50 exec/s: 30 rss: 72Mb L: 40/49 MS: 1 CopyPart- 00:07:06.447 [2024-05-15 10:59:03.659066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.447 [2024-05-15 10:59:03.659094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.659127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.447 [2024-05-15 10:59:03.659142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.659200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.447 [2024-05-15 10:59:03.659220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.447 #31 NEW cov: 12112 ft: 14605 corp: 22/808b lim: 50 exec/s: 31 rss: 72Mb L: 30/49 MS: 1 CMP- DE: "\001\000\000\034"- 00:07:06.447 [2024-05-15 10:59:03.709116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.447 [2024-05-15 10:59:03.709160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.447 [2024-05-15 10:59:03.709198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.447 [2024-05-15 10:59:03.709214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.705 #32 NEW cov: 12112 ft: 14624 corp: 23/832b lim: 50 exec/s: 32 rss: 72Mb L: 24/49 MS: 1 CrossOver- 00:07:06.705 [2024-05-15 10:59:03.749367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.705 [2024-05-15 10:59:03.749401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.705 [2024-05-15 10:59:03.749431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.705 [2024-05-15 10:59:03.749446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.705 [2024-05-15 10:59:03.749507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.705 [2024-05-15 10:59:03.749523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.705 #33 NEW cov: 12112 ft: 14641 corp: 24/866b lim: 50 exec/s: 33 rss: 72Mb L: 34/49 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:06.705 [2024-05-15 10:59:03.789680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.705 [2024-05-15 10:59:03.789709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.705 [2024-05-15 10:59:03.789755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.705 [2024-05-15 10:59:03.789771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.705 [2024-05-15 10:59:03.789829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.705 [2024-05-15 10:59:03.789846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.705 [2024-05-15 10:59:03.789908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.705 [2024-05-15 10:59:03.789924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.706 #34 NEW cov: 12112 ft: 14659 corp: 25/915b lim: 50 exec/s: 34 rss: 72Mb L: 49/49 MS: 1 PersAutoDict- DE: "\001\000\000\034"- 00:07:06.706 [2024-05-15 10:59:03.829733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.706 [2024-05-15 10:59:03.829759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.829807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.706 [2024-05-15 10:59:03.829823] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.829881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.706 [2024-05-15 10:59:03.829899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.829958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.706 [2024-05-15 10:59:03.829975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.706 #35 NEW cov: 12112 ft: 14678 corp: 26/962b lim: 50 exec/s: 35 rss: 72Mb L: 47/49 MS: 1 CopyPart- 00:07:06.706 [2024-05-15 10:59:03.879948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.706 [2024-05-15 10:59:03.879976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.880013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.706 [2024-05-15 10:59:03.880028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.880084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.706 [2024-05-15 10:59:03.880098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.880156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.706 [2024-05-15 10:59:03.880172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.706 #36 NEW cov: 12112 ft: 14708 corp: 27/1011b lim: 50 exec/s: 36 rss: 72Mb L: 49/49 MS: 1 ChangeBinInt- 00:07:06.706 [2024-05-15 10:59:03.930048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.706 [2024-05-15 10:59:03.930075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.930111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.706 [2024-05-15 10:59:03.930127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.930185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.706 [2024-05-15 10:59:03.930201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.930261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.706 [2024-05-15 10:59:03.930277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:07:06.706 #42 NEW cov: 12112 ft: 14722 corp: 28/1051b lim: 50 exec/s: 42 rss: 72Mb L: 40/49 MS: 1 ChangeBit- 00:07:06.706 [2024-05-15 10:59:03.970158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.706 [2024-05-15 10:59:03.970188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.970237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.706 [2024-05-15 10:59:03.970253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.970312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.706 [2024-05-15 10:59:03.970328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.706 [2024-05-15 10:59:03.970394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.706 [2024-05-15 10:59:03.970413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.964 #43 NEW cov: 12112 ft: 14733 corp: 29/1091b lim: 50 exec/s: 43 rss: 72Mb L: 40/49 MS: 1 CrossOver- 00:07:06.964 [2024-05-15 10:59:04.020282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.964 [2024-05-15 10:59:04.020310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.020358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.964 [2024-05-15 10:59:04.020374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.020439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.964 [2024-05-15 10:59:04.020456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.020513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.964 [2024-05-15 10:59:04.020528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.964 #44 NEW cov: 12112 ft: 14745 corp: 30/1132b lim: 50 exec/s: 44 rss: 72Mb L: 41/49 MS: 1 InsertByte- 00:07:06.964 [2024-05-15 10:59:04.060220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.964 [2024-05-15 10:59:04.060248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.060283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.964 [2024-05-15 10:59:04.060299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:06.964 [2024-05-15 10:59:04.060356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.964 [2024-05-15 10:59:04.060372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.964 #45 NEW cov: 12112 ft: 14757 corp: 31/1167b lim: 50 exec/s: 45 rss: 72Mb L: 35/49 MS: 1 InsertRepeatedBytes- 00:07:06.964 [2024-05-15 10:59:04.110537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.964 [2024-05-15 10:59:04.110567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.110607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.964 [2024-05-15 10:59:04.110623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.110681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.964 [2024-05-15 10:59:04.110697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.110757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.964 [2024-05-15 10:59:04.110772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.964 #46 NEW cov: 12112 ft: 14768 corp: 32/1208b lim: 50 exec/s: 46 rss: 72Mb L: 41/49 MS: 1 InsertByte- 00:07:06.964 [2024-05-15 10:59:04.150654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.964 [2024-05-15 10:59:04.150682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.150722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.964 [2024-05-15 10:59:04.150737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.150794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.964 [2024-05-15 10:59:04.150810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.150868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.964 [2024-05-15 10:59:04.150883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.964 #47 NEW cov: 12112 ft: 14790 corp: 33/1249b lim: 50 exec/s: 47 rss: 72Mb L: 41/49 MS: 1 InsertByte- 00:07:06.964 [2024-05-15 10:59:04.190783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.964 [2024-05-15 10:59:04.190812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:06.964 [2024-05-15 10:59:04.190855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.964 [2024-05-15 10:59:04.190873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.964 [2024-05-15 10:59:04.190932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.965 [2024-05-15 10:59:04.190949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.965 [2024-05-15 10:59:04.191006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.965 [2024-05-15 10:59:04.191022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.965 #48 NEW cov: 12112 ft: 14800 corp: 34/1294b lim: 50 exec/s: 48 rss: 72Mb L: 45/49 MS: 1 CopyPart- 00:07:07.222 [2024-05-15 10:59:04.240784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.222 [2024-05-15 10:59:04.240813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.240848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.223 [2024-05-15 10:59:04.240864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.240924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.223 [2024-05-15 10:59:04.240941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.223 #49 NEW cov: 12112 ft: 14817 corp: 35/1333b lim: 50 exec/s: 49 rss: 72Mb L: 39/49 MS: 1 ChangeBinInt- 00:07:07.223 [2024-05-15 10:59:04.290740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.223 [2024-05-15 10:59:04.290768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.290799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.223 [2024-05-15 10:59:04.290815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.223 #50 NEW cov: 12112 ft: 14844 corp: 36/1361b lim: 50 exec/s: 50 rss: 73Mb L: 28/49 MS: 1 EraseBytes- 00:07:07.223 [2024-05-15 10:59:04.340930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.223 [2024-05-15 10:59:04.340962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.341008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.223 [2024-05-15 10:59:04.341025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:07.223 #51 NEW cov: 12112 ft: 14894 corp: 37/1386b lim: 50 exec/s: 51 rss: 73Mb L: 25/49 MS: 1 CopyPart- 00:07:07.223 [2024-05-15 10:59:04.381170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.223 [2024-05-15 10:59:04.381197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.381248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.223 [2024-05-15 10:59:04.381263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.381321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.223 [2024-05-15 10:59:04.381337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.223 #52 NEW cov: 12112 ft: 14904 corp: 38/1425b lim: 50 exec/s: 52 rss: 73Mb L: 39/49 MS: 1 ChangeBit- 00:07:07.223 [2024-05-15 10:59:04.421432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.223 [2024-05-15 10:59:04.421461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.421509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.223 [2024-05-15 10:59:04.421524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.421578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.223 [2024-05-15 10:59:04.421595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.421657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.223 [2024-05-15 10:59:04.421672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.223 #53 NEW cov: 12112 ft: 14933 corp: 39/1470b lim: 50 exec/s: 53 rss: 73Mb L: 45/49 MS: 1 ShuffleBytes- 00:07:07.223 [2024-05-15 10:59:04.471623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.223 [2024-05-15 10:59:04.471651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.471699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.223 [2024-05-15 10:59:04.471715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.223 [2024-05-15 10:59:04.471774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.223 [2024-05-15 10:59:04.471791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:07.223 [2024-05-15 10:59:04.471849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.223 [2024-05-15 10:59:04.471865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.480 #54 NEW cov: 12112 ft: 14938 corp: 40/1510b lim: 50 exec/s: 27 rss: 73Mb L: 40/49 MS: 1 ChangeByte- 00:07:07.480 #54 DONE cov: 12112 ft: 14938 corp: 40/1510b lim: 50 exec/s: 27 rss: 73Mb 00:07:07.480 ###### Recommended dictionary. ###### 00:07:07.481 "\001\000\000\000\000\000\000\000" # Uses: 2 00:07:07.481 "\001\000\000\034" # Uses: 1 00:07:07.481 ###### End of recommended dictionary. ###### 00:07:07.481 Done 54 runs in 2 second(s) 00:07:07.481 [2024-05-15 10:59:04.492063] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:07.481 10:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:07.481 [2024-05-15 10:59:04.661934] Starting SPDK v24.05-pre git sha1 01f10b8a3 
/ DPDK 23.11.0 initialization... 00:07:07.481 [2024-05-15 10:59:04.662003] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409630 ] 00:07:07.481 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.738 [2024-05-15 10:59:04.913117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.738 [2024-05-15 10:59:05.000865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.996 [2024-05-15 10:59:05.060137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.996 [2024-05-15 10:59:05.076091] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:07.996 [2024-05-15 10:59:05.076518] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:07.996 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.996 INFO: Seed: 2411138315 00:07:07.996 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:07:07.996 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:07:07.996 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:07.996 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.996 #2 INITED exec/s: 0 rss: 63Mb 00:07:07.996 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:07.996 This may also happen if the target rejected all inputs we tried so far 00:07:07.996 [2024-05-15 10:59:05.124775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.996 [2024-05-15 10:59:05.124804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.254 NEW_FUNC[1/686]: 0x4a94b0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:08.254 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:08.254 #8 NEW cov: 11886 ft: 11887 corp: 2/30b lim: 85 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:08.254 [2024-05-15 10:59:05.435413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.254 [2024-05-15 10:59:05.435448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.254 NEW_FUNC[1/1]: 0x1a51a40 in sock_group_impl_poll_count /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:712 00:07:08.254 #19 NEW cov: 12024 ft: 12363 corp: 3/59b lim: 85 exec/s: 0 rss: 71Mb L: 29/29 MS: 1 ChangeBit- 00:07:08.254 [2024-05-15 10:59:05.485628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.254 [2024-05-15 10:59:05.485657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.254 [2024-05-15 10:59:05.485687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:1 nsid:0 00:07:08.254 [2024-05-15 10:59:05.485703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.254 #20 NEW cov: 12030 ft: 13519 corp: 4/94b lim: 85 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:08.512 [2024-05-15 10:59:05.525756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.512 [2024-05-15 10:59:05.525785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.512 [2024-05-15 10:59:05.525818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.512 [2024-05-15 10:59:05.525834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.512 #21 NEW cov: 12115 ft: 13749 corp: 5/144b lim: 85 exec/s: 0 rss: 71Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:08.512 [2024-05-15 10:59:05.575940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.512 [2024-05-15 10:59:05.575966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.512 [2024-05-15 10:59:05.575999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.512 [2024-05-15 10:59:05.576015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.512 #22 NEW cov: 12115 ft: 13874 corp: 6/180b lim: 85 exec/s: 0 rss: 71Mb L: 36/50 MS: 1 InsertByte- 00:07:08.512 [2024-05-15 10:59:05.615994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.512 [2024-05-15 10:59:05.616021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.512 [2024-05-15 10:59:05.616063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.512 [2024-05-15 10:59:05.616079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.512 #23 NEW cov: 12115 ft: 13944 corp: 7/215b lim: 85 exec/s: 0 rss: 71Mb L: 35/50 MS: 1 ShuffleBytes- 00:07:08.512 [2024-05-15 10:59:05.655988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.512 [2024-05-15 10:59:05.656015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.512 #25 NEW cov: 12115 ft: 14058 corp: 8/237b lim: 85 exec/s: 0 rss: 71Mb L: 22/50 MS: 2 InsertByte-CrossOver- 00:07:08.512 [2024-05-15 10:59:05.696108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.512 [2024-05-15 10:59:05.696136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.512 #26 NEW cov: 12115 ft: 14149 corp: 9/266b lim: 85 exec/s: 0 rss: 71Mb L: 29/50 MS: 1 ChangeBinInt- 00:07:08.512 [2024-05-15 10:59:05.736682] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.512 [2024-05-15 10:59:05.736710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.512 [2024-05-15 10:59:05.736750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.512 [2024-05-15 10:59:05.736765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.512 [2024-05-15 10:59:05.736816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:08.512 [2024-05-15 10:59:05.736831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.512 [2024-05-15 10:59:05.736883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:08.512 [2024-05-15 10:59:05.736897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.512 #27 NEW cov: 12115 ft: 14616 corp: 10/336b lim: 85 exec/s: 0 rss: 71Mb L: 70/70 MS: 1 InsertRepeatedBytes- 00:07:08.770 [2024-05-15 10:59:05.786350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.770 [2024-05-15 10:59:05.786378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.770 #28 NEW cov: 12115 ft: 14667 corp: 11/356b lim: 85 exec/s: 0 rss: 71Mb L: 20/70 MS: 1 InsertRepeatedBytes- 00:07:08.770 [2024-05-15 10:59:05.826571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.770 [2024-05-15 10:59:05.826599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.770 [2024-05-15 10:59:05.826628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.770 [2024-05-15 10:59:05.826643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.770 #29 NEW cov: 12115 ft: 14707 corp: 12/392b lim: 85 exec/s: 0 rss: 71Mb L: 36/70 MS: 1 ChangeBit- 00:07:08.770 [2024-05-15 10:59:05.876728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.770 [2024-05-15 10:59:05.876755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.770 [2024-05-15 10:59:05.876785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.770 [2024-05-15 10:59:05.876801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.770 #30 NEW cov: 12115 ft: 14738 corp: 13/428b lim: 85 exec/s: 0 rss: 71Mb L: 36/70 MS: 1 ChangeBinInt- 00:07:08.770 [2024-05-15 10:59:05.926888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.770 [2024-05-15 10:59:05.926916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.770 [2024-05-15 10:59:05.926945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.770 [2024-05-15 10:59:05.926960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.770 #31 NEW cov: 12115 ft: 14800 corp: 14/464b lim: 85 exec/s: 0 rss: 72Mb L: 36/70 MS: 1 ChangeByte- 00:07:08.770 [2024-05-15 10:59:05.967039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.770 [2024-05-15 10:59:05.967066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.770 [2024-05-15 10:59:05.967119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.770 [2024-05-15 10:59:05.967135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.770 #32 NEW cov: 12115 ft: 14817 corp: 15/505b lim: 85 exec/s: 0 rss: 72Mb L: 41/70 MS: 1 EraseBytes- 00:07:08.770 [2024-05-15 10:59:06.017133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.770 [2024-05-15 10:59:06.017162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.770 [2024-05-15 10:59:06.017203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.770 [2024-05-15 10:59:06.017219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.028 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:09.028 #33 NEW cov: 12138 ft: 14899 corp: 16/540b lim: 85 exec/s: 0 rss: 72Mb L: 35/70 MS: 1 EraseBytes- 00:07:09.028 [2024-05-15 10:59:06.067183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.028 [2024-05-15 10:59:06.067211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.028 #34 NEW cov: 12138 ft: 14947 corp: 17/569b lim: 85 exec/s: 0 rss: 72Mb L: 29/70 MS: 1 ChangeBinInt- 00:07:09.028 [2024-05-15 10:59:06.117443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.028 [2024-05-15 10:59:06.117470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.028 [2024-05-15 10:59:06.117517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.028 [2024-05-15 10:59:06.117532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.028 #35 NEW cov: 12138 ft: 15001 corp: 18/610b lim: 85 exec/s: 35 rss: 72Mb L: 41/70 MS: 1 ChangeBit- 00:07:09.028 [2024-05-15 10:59:06.167410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.028 [2024-05-15 10:59:06.167439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.028 #36 NEW cov: 12138 ft: 15038 corp: 19/639b lim: 85 exec/s: 36 rss: 72Mb L: 29/70 MS: 1 ChangeByte- 00:07:09.028 [2024-05-15 10:59:06.217976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.028 [2024-05-15 10:59:06.218004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.028 [2024-05-15 10:59:06.218043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.028 [2024-05-15 10:59:06.218060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.028 [2024-05-15 10:59:06.218111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.028 [2024-05-15 10:59:06.218126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.028 [2024-05-15 10:59:06.218176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:09.028 [2024-05-15 10:59:06.218192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.028 #37 NEW cov: 12138 ft: 15059 corp: 20/709b lim: 85 exec/s: 37 rss: 72Mb L: 70/70 MS: 1 ChangeBit- 00:07:09.028 [2024-05-15 10:59:06.257652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.028 [2024-05-15 10:59:06.257680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.028 #38 NEW cov: 12138 ft: 15077 corp: 21/728b lim: 85 exec/s: 38 rss: 72Mb L: 19/70 MS: 1 EraseBytes- 00:07:09.286 [2024-05-15 10:59:06.308260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.286 [2024-05-15 10:59:06.308289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.286 [2024-05-15 10:59:06.308326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.286 [2024-05-15 10:59:06.308341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.286 [2024-05-15 10:59:06.308397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.286 [2024-05-15 10:59:06.308412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.286 [2024-05-15 10:59:06.308465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:09.286 [2024-05-15 10:59:06.308478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.286 #39 NEW cov: 12138 ft: 15100 corp: 22/801b lim: 85 exec/s: 39 rss: 72Mb L: 73/73 MS: 1 CrossOver- 00:07:09.286 [2024-05-15 10:59:06.348090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 
nsid:0 00:07:09.286 [2024-05-15 10:59:06.348118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.286 [2024-05-15 10:59:06.348147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.286 [2024-05-15 10:59:06.348161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.286 #40 NEW cov: 12138 ft: 15114 corp: 23/837b lim: 85 exec/s: 40 rss: 72Mb L: 36/73 MS: 1 InsertByte- 00:07:09.286 [2024-05-15 10:59:06.398233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.286 [2024-05-15 10:59:06.398260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.286 [2024-05-15 10:59:06.398309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.286 [2024-05-15 10:59:06.398323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.286 #41 NEW cov: 12138 ft: 15128 corp: 24/878b lim: 85 exec/s: 41 rss: 73Mb L: 41/73 MS: 1 ChangeByte- 00:07:09.286 [2024-05-15 10:59:06.438139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.286 [2024-05-15 10:59:06.438170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.286 #42 NEW cov: 12138 ft: 15181 corp: 25/907b lim: 85 exec/s: 42 rss: 73Mb L: 29/73 MS: 1 ChangeByte- 00:07:09.286 [2024-05-15 10:59:06.478276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.286 [2024-05-15 10:59:06.478303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.286 #43 NEW cov: 12138 ft: 15194 corp: 26/936b lim: 85 exec/s: 43 rss: 73Mb L: 29/73 MS: 1 ChangeBit- 00:07:09.286 [2024-05-15 10:59:06.528573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.286 [2024-05-15 10:59:06.528601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.286 [2024-05-15 10:59:06.528635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.286 [2024-05-15 10:59:06.528651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.544 #44 NEW cov: 12138 ft: 15211 corp: 27/978b lim: 85 exec/s: 44 rss: 73Mb L: 42/73 MS: 1 InsertByte- 00:07:09.544 [2024-05-15 10:59:06.578754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.544 [2024-05-15 10:59:06.578782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.544 [2024-05-15 10:59:06.578829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.544 [2024-05-15 10:59:06.578844] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.544 #45 NEW cov: 12138 ft: 15226 corp: 28/1019b lim: 85 exec/s: 45 rss: 73Mb L: 41/73 MS: 1 ChangeByte- 00:07:09.544 [2024-05-15 10:59:06.618650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.544 [2024-05-15 10:59:06.618679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.544 #46 NEW cov: 12138 ft: 15252 corp: 29/1038b lim: 85 exec/s: 46 rss: 73Mb L: 19/73 MS: 1 ChangeBit- 00:07:09.544 [2024-05-15 10:59:06.668959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.544 [2024-05-15 10:59:06.668987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.544 [2024-05-15 10:59:06.669016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.544 [2024-05-15 10:59:06.669032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.544 #47 NEW cov: 12138 ft: 15261 corp: 30/1079b lim: 85 exec/s: 47 rss: 73Mb L: 41/73 MS: 1 ChangeBit- 00:07:09.544 [2024-05-15 10:59:06.709056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.544 [2024-05-15 10:59:06.709085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.544 [2024-05-15 10:59:06.709115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.544 [2024-05-15 10:59:06.709129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.544 #48 NEW cov: 12138 ft: 15292 corp: 31/1120b lim: 85 exec/s: 48 rss: 73Mb L: 41/73 MS: 1 ShuffleBytes- 00:07:09.544 [2024-05-15 10:59:06.759673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.544 [2024-05-15 10:59:06.759700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.544 [2024-05-15 10:59:06.759743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.544 [2024-05-15 10:59:06.759758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.544 [2024-05-15 10:59:06.759808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.544 [2024-05-15 10:59:06.759823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.544 [2024-05-15 10:59:06.759873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:09.544 [2024-05-15 10:59:06.759887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.544 [2024-05-15 10:59:06.759938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:09.544 [2024-05-15 10:59:06.759953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:09.544 #49 NEW cov: 12138 ft: 15337 corp: 32/1205b lim: 85 exec/s: 49 rss: 73Mb L: 85/85 MS: 1 CopyPart- 00:07:09.544 [2024-05-15 10:59:06.799308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.544 [2024-05-15 10:59:06.799336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.544 [2024-05-15 10:59:06.799374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.544 [2024-05-15 10:59:06.799394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.801 #50 NEW cov: 12138 ft: 15364 corp: 33/1241b lim: 85 exec/s: 50 rss: 73Mb L: 36/85 MS: 1 CMP- DE: "\001\000"- 00:07:09.801 [2024-05-15 10:59:06.839253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.801 [2024-05-15 10:59:06.839280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.801 #51 NEW cov: 12138 ft: 15382 corp: 34/1272b lim: 85 exec/s: 51 rss: 73Mb L: 31/85 MS: 1 EraseBytes- 00:07:09.801 [2024-05-15 10:59:06.889448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.801 [2024-05-15 10:59:06.889475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.801 #52 NEW cov: 12138 ft: 15428 corp: 35/1303b lim: 85 exec/s: 52 rss: 73Mb L: 31/85 MS: 1 ChangeBit- 00:07:09.801 [2024-05-15 10:59:06.940144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.801 [2024-05-15 10:59:06.940172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.801 [2024-05-15 10:59:06.940216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.801 [2024-05-15 10:59:06.940232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.801 [2024-05-15 10:59:06.940283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.801 [2024-05-15 10:59:06.940298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.801 [2024-05-15 10:59:06.940349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:09.801 [2024-05-15 10:59:06.940363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.801 [2024-05-15 10:59:06.940421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:09.801 [2024-05-15 10:59:06.940440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 
p:0 m:0 dnr:1 00:07:09.801 #53 NEW cov: 12138 ft: 15509 corp: 36/1388b lim: 85 exec/s: 53 rss: 73Mb L: 85/85 MS: 1 ShuffleBytes- 00:07:09.801 [2024-05-15 10:59:06.990163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.801 [2024-05-15 10:59:06.990190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.801 [2024-05-15 10:59:06.990223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.801 [2024-05-15 10:59:06.990238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.801 [2024-05-15 10:59:06.990289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.801 [2024-05-15 10:59:06.990304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.801 [2024-05-15 10:59:06.990355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:09.801 [2024-05-15 10:59:06.990370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.801 #54 NEW cov: 12138 ft: 15521 corp: 37/1459b lim: 85 exec/s: 54 rss: 74Mb L: 71/85 MS: 1 EraseBytes- 00:07:09.801 [2024-05-15 10:59:07.039979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.801 [2024-05-15 10:59:07.040005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.801 [2024-05-15 10:59:07.040045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.801 [2024-05-15 10:59:07.040061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.801 #55 NEW cov: 12138 ft: 15522 corp: 38/1500b lim: 85 exec/s: 55 rss: 74Mb L: 41/85 MS: 1 ChangeBit- 00:07:10.060 [2024-05-15 10:59:07.080093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.060 [2024-05-15 10:59:07.080122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.060 [2024-05-15 10:59:07.080151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:10.060 [2024-05-15 10:59:07.080165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.060 #56 NEW cov: 12138 ft: 15537 corp: 39/1541b lim: 85 exec/s: 56 rss: 74Mb L: 41/85 MS: 1 ChangeByte- 00:07:10.060 [2024-05-15 10:59:07.120078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.060 [2024-05-15 10:59:07.120105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.060 #57 NEW cov: 12138 ft: 15541 corp: 40/1570b lim: 85 exec/s: 28 rss: 74Mb L: 29/85 MS: 1 ChangeBinInt- 00:07:10.060 #57 DONE cov: 12138 ft: 15541 corp: 40/1570b lim: 85 exec/s: 
28 rss: 74Mb 00:07:10.060 ###### Recommended dictionary. ###### 00:07:10.060 "\001\000" # Uses: 0 00:07:10.060 ###### End of recommended dictionary. ###### 00:07:10.061 Done 57 runs in 2 second(s) 00:07:10.061 [2024-05-15 10:59:07.140517] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:10.061 10:59:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:10.061 [2024-05-15 10:59:07.308203] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:10.061 [2024-05-15 10:59:07.308281] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410124 ] 00:07:10.319 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.319 [2024-05-15 10:59:07.557741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.576 [2024-05-15 10:59:07.647871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.576 [2024-05-15 10:59:07.707399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.576 [2024-05-15 10:59:07.723370] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:10.576 [2024-05-15 10:59:07.723807] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:10.576 INFO: Running with entropic power schedule (0xFF, 100). 00:07:10.576 INFO: Seed: 765186706 00:07:10.576 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:07:10.576 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:07:10.576 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:10.576 INFO: A corpus is not provided, starting from an empty corpus 00:07:10.576 #2 INITED exec/s: 0 rss: 64Mb 00:07:10.576 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:10.576 This may also happen if the target rejected all inputs we tried so far 00:07:10.576 [2024-05-15 10:59:07.801183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.576 [2024-05-15 10:59:07.801226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.576 [2024-05-15 10:59:07.801296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:10.576 [2024-05-15 10:59:07.801314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.577 [2024-05-15 10:59:07.801399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:10.577 [2024-05-15 10:59:07.801419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.577 [2024-05-15 10:59:07.801493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:10.577 [2024-05-15 10:59:07.801514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.142 NEW_FUNC[1/685]: 0x4ac6e0 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:11.142 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.142 #6 NEW cov: 11809 ft: 11825 corp: 2/21b lim: 25 exec/s: 0 rss: 70Mb L: 20/20 MS: 4 ChangeByte-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:11.142 [2024-05-15 10:59:08.151000] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.142 [2024-05-15 10:59:08.151054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.142 [2024-05-15 10:59:08.151187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.142 [2024-05-15 10:59:08.151216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.142 [2024-05-15 10:59:08.151351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.142 [2024-05-15 10:59:08.151383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.142 [2024-05-15 10:59:08.151521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.142 [2024-05-15 10:59:08.151548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.142 NEW_FUNC[1/1]: 0x1d80210 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:938 00:07:11.142 #7 NEW cov: 11957 ft: 12581 corp: 3/41b lim: 25 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CMP- DE: "\000\205\373Q\370\336q\010"- 00:07:11.142 [2024-05-15 10:59:08.200693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.142 [2024-05-15 10:59:08.200728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.200838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.143 [2024-05-15 10:59:08.200862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.200986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.143 [2024-05-15 10:59:08.201011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.201135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.143 [2024-05-15 10:59:08.201159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.143 #8 NEW cov: 11963 ft: 12990 corp: 4/61b lim: 25 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ChangeByte- 00:07:11.143 [2024-05-15 10:59:08.251162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.143 [2024-05-15 10:59:08.251191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.251315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.143 [2024-05-15 10:59:08.251338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 
10:59:08.251458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.143 [2024-05-15 10:59:08.251478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.251601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.143 [2024-05-15 10:59:08.251624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.143 #9 NEW cov: 12048 ft: 13357 corp: 5/82b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 InsertByte- 00:07:11.143 [2024-05-15 10:59:08.301361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.143 [2024-05-15 10:59:08.301397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.301466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.143 [2024-05-15 10:59:08.301487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.301616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.143 [2024-05-15 10:59:08.301639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.301769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.143 [2024-05-15 10:59:08.301793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.143 #10 NEW cov: 12048 ft: 13426 corp: 6/102b lim: 25 exec/s: 0 rss: 71Mb L: 20/21 MS: 1 ChangeBinInt- 00:07:11.143 [2024-05-15 10:59:08.351178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.143 [2024-05-15 10:59:08.351212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.351356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.143 [2024-05-15 10:59:08.351384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.143 [2024-05-15 10:59:08.351506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.143 [2024-05-15 10:59:08.351528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.143 #13 NEW cov: 12048 ft: 13862 corp: 7/119b lim: 25 exec/s: 0 rss: 71Mb L: 17/21 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:11.143 [2024-05-15 10:59:08.391056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.143 [2024-05-15 10:59:08.391093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.143 
[2024-05-15 10:59:08.391241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.143 [2024-05-15 10:59:08.391265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.401 #14 NEW cov: 12048 ft: 14178 corp: 8/129b lim: 25 exec/s: 0 rss: 71Mb L: 10/21 MS: 1 EraseBytes- 00:07:11.401 [2024-05-15 10:59:08.441788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.401 [2024-05-15 10:59:08.441821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.401 [2024-05-15 10:59:08.441966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.401 [2024-05-15 10:59:08.441985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.401 [2024-05-15 10:59:08.442112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.401 [2024-05-15 10:59:08.442138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.401 [2024-05-15 10:59:08.442266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.401 [2024-05-15 10:59:08.442288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.401 #15 NEW cov: 12048 ft: 14291 corp: 9/149b lim: 25 exec/s: 0 rss: 71Mb L: 20/21 MS: 1 ChangeBit- 00:07:11.401 [2024-05-15 10:59:08.491743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.401 [2024-05-15 10:59:08.491775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.401 [2024-05-15 10:59:08.491834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.401 [2024-05-15 10:59:08.491855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.401 [2024-05-15 10:59:08.491983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.401 [2024-05-15 10:59:08.492008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.401 [2024-05-15 10:59:08.492141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.401 [2024-05-15 10:59:08.492163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.401 #16 NEW cov: 12048 ft: 14350 corp: 10/171b lim: 25 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 CopyPart- 00:07:11.401 [2024-05-15 10:59:08.541701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.402 [2024-05-15 10:59:08.541732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.402 [2024-05-15 
10:59:08.541854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.402 [2024-05-15 10:59:08.541874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.402 #17 NEW cov: 12048 ft: 14425 corp: 11/182b lim: 25 exec/s: 0 rss: 71Mb L: 11/22 MS: 1 InsertByte- 00:07:11.402 [2024-05-15 10:59:08.591799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.402 [2024-05-15 10:59:08.591833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.402 [2024-05-15 10:59:08.591904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.402 [2024-05-15 10:59:08.591924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.402 [2024-05-15 10:59:08.592048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.402 [2024-05-15 10:59:08.592071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.402 [2024-05-15 10:59:08.592200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.402 [2024-05-15 10:59:08.592223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.402 #18 NEW cov: 12048 ft: 14481 corp: 12/202b lim: 25 exec/s: 0 rss: 71Mb L: 20/22 MS: 1 CrossOver- 00:07:11.402 [2024-05-15 10:59:08.651977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.402 [2024-05-15 10:59:08.652009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.402 [2024-05-15 10:59:08.652119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.402 [2024-05-15 10:59:08.652138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.660 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:11.660 #19 NEW cov: 12071 ft: 14519 corp: 13/213b lim: 25 exec/s: 0 rss: 71Mb L: 11/22 MS: 1 ChangeBinInt- 00:07:11.660 [2024-05-15 10:59:08.712548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.660 [2024-05-15 10:59:08.712581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.712655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.660 [2024-05-15 10:59:08.712685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.712830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.660 [2024-05-15 10:59:08.712849] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.712982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.660 [2024-05-15 10:59:08.713007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.660 #20 NEW cov: 12071 ft: 14577 corp: 14/234b lim: 25 exec/s: 0 rss: 72Mb L: 21/22 MS: 1 ShuffleBytes- 00:07:11.660 [2024-05-15 10:59:08.762785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.660 [2024-05-15 10:59:08.762818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.762887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.660 [2024-05-15 10:59:08.762907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.763025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.660 [2024-05-15 10:59:08.763047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.763171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.660 [2024-05-15 10:59:08.763188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.660 #21 NEW cov: 12071 ft: 14609 corp: 15/255b lim: 25 exec/s: 21 rss: 72Mb L: 21/22 MS: 1 CopyPart- 00:07:11.660 [2024-05-15 10:59:08.802374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.660 [2024-05-15 10:59:08.802410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.802550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.660 [2024-05-15 10:59:08.802577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.660 #22 NEW cov: 12071 ft: 14615 corp: 16/267b lim: 25 exec/s: 22 rss: 72Mb L: 12/22 MS: 1 InsertByte- 00:07:11.660 [2024-05-15 10:59:08.852684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.660 [2024-05-15 10:59:08.852711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.852844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.660 [2024-05-15 10:59:08.852870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.660 #23 NEW cov: 12071 ft: 14622 corp: 17/278b lim: 25 exec/s: 23 rss: 72Mb L: 11/22 MS: 1 CopyPart- 00:07:11.660 [2024-05-15 10:59:08.902815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 
nsid:0 00:07:11.660 [2024-05-15 10:59:08.902847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.660 [2024-05-15 10:59:08.902960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.660 [2024-05-15 10:59:08.902983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.918 #24 NEW cov: 12071 ft: 14664 corp: 18/290b lim: 25 exec/s: 24 rss: 72Mb L: 12/22 MS: 1 ChangeBinInt- 00:07:11.918 [2024-05-15 10:59:08.952872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.918 [2024-05-15 10:59:08.952904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:08.952973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.918 [2024-05-15 10:59:08.952996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:08.953112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.918 [2024-05-15 10:59:08.953134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:08.953262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.918 [2024-05-15 10:59:08.953281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.918 #25 NEW cov: 12071 ft: 14679 corp: 19/310b lim: 25 exec/s: 25 rss: 72Mb L: 20/22 MS: 1 ShuffleBytes- 00:07:11.918 [2024-05-15 10:59:08.993487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.918 [2024-05-15 10:59:08.993520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:08.993591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.918 [2024-05-15 10:59:08.993612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:08.993749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.918 [2024-05-15 10:59:08.993771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:08.993901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.918 [2024-05-15 10:59:08.993920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.918 #26 NEW cov: 12071 ft: 14763 corp: 20/333b lim: 25 exec/s: 26 rss: 72Mb L: 23/23 MS: 1 CrossOver- 00:07:11.918 [2024-05-15 10:59:09.043628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 
00:07:11.918 [2024-05-15 10:59:09.043657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:09.043721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.918 [2024-05-15 10:59:09.043738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:09.043859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.918 [2024-05-15 10:59:09.043878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:09.044000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.918 [2024-05-15 10:59:09.044021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.918 #27 NEW cov: 12071 ft: 14829 corp: 21/353b lim: 25 exec/s: 27 rss: 72Mb L: 20/23 MS: 1 ShuffleBytes- 00:07:11.918 [2024-05-15 10:59:09.083111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.918 [2024-05-15 10:59:09.083145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:09.083283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.918 [2024-05-15 10:59:09.083303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:09.083436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.918 [2024-05-15 10:59:09.083462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.918 #28 NEW cov: 12071 ft: 14846 corp: 22/370b lim: 25 exec/s: 28 rss: 72Mb L: 17/23 MS: 1 InsertRepeatedBytes- 00:07:11.918 [2024-05-15 10:59:09.143889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.918 [2024-05-15 10:59:09.143924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:09.144015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.918 [2024-05-15 10:59:09.144037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:09.144166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.918 [2024-05-15 10:59:09.144188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.918 [2024-05-15 10:59:09.144311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.918 [2024-05-15 10:59:09.144336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.918 #29 NEW cov: 12071 ft: 14874 corp: 23/393b lim: 25 exec/s: 29 rss: 72Mb L: 23/23 MS: 1 CMP- DE: "\001\000"- 00:07:12.177 [2024-05-15 10:59:09.203663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.177 [2024-05-15 10:59:09.203690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.203815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.177 [2024-05-15 10:59:09.203838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.177 #30 NEW cov: 12071 ft: 14905 corp: 24/405b lim: 25 exec/s: 30 rss: 72Mb L: 12/23 MS: 1 ShuffleBytes- 00:07:12.177 [2024-05-15 10:59:09.244121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.177 [2024-05-15 10:59:09.244156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.244278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.177 [2024-05-15 10:59:09.244298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.244419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.177 [2024-05-15 10:59:09.244442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.244559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.177 [2024-05-15 10:59:09.244581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.177 #31 NEW cov: 12071 ft: 14920 corp: 25/426b lim: 25 exec/s: 31 rss: 72Mb L: 21/23 MS: 1 ChangeByte- 00:07:12.177 [2024-05-15 10:59:09.293984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.177 [2024-05-15 10:59:09.294018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.294114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.177 [2024-05-15 10:59:09.294135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.177 #32 NEW cov: 12071 ft: 14945 corp: 26/438b lim: 25 exec/s: 32 rss: 72Mb L: 12/23 MS: 1 ChangeBinInt- 00:07:12.177 [2024-05-15 10:59:09.344077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.177 [2024-05-15 10:59:09.344108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.344235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.177 
[2024-05-15 10:59:09.344253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.177 #33 NEW cov: 12071 ft: 14989 corp: 27/450b lim: 25 exec/s: 33 rss: 72Mb L: 12/23 MS: 1 ChangeBit- 00:07:12.177 [2024-05-15 10:59:09.404652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.177 [2024-05-15 10:59:09.404686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.404763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.177 [2024-05-15 10:59:09.404785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.404902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.177 [2024-05-15 10:59:09.404920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.177 [2024-05-15 10:59:09.405048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.177 [2024-05-15 10:59:09.405069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.177 #34 NEW cov: 12071 ft: 14999 corp: 28/473b lim: 25 exec/s: 34 rss: 72Mb L: 23/23 MS: 1 ChangeBinInt- 00:07:12.435 [2024-05-15 10:59:09.464552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.435 [2024-05-15 10:59:09.464586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.435 [2024-05-15 10:59:09.464714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.435 [2024-05-15 10:59:09.464733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.435 #35 NEW cov: 12071 ft: 15004 corp: 29/485b lim: 25 exec/s: 35 rss: 73Mb L: 12/23 MS: 1 EraseBytes- 00:07:12.435 [2024-05-15 10:59:09.525038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.435 [2024-05-15 10:59:09.525070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.435 [2024-05-15 10:59:09.525173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.435 [2024-05-15 10:59:09.525195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.435 [2024-05-15 10:59:09.525320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.435 [2024-05-15 10:59:09.525339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.435 [2024-05-15 10:59:09.525481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.435 [2024-05-15 
10:59:09.525504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.436 #36 NEW cov: 12071 ft: 15028 corp: 30/505b lim: 25 exec/s: 36 rss: 73Mb L: 20/23 MS: 1 ShuffleBytes- 00:07:12.436 [2024-05-15 10:59:09.585261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.436 [2024-05-15 10:59:09.585293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.436 [2024-05-15 10:59:09.585362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.436 [2024-05-15 10:59:09.585386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.436 [2024-05-15 10:59:09.585503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.436 [2024-05-15 10:59:09.585524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.436 [2024-05-15 10:59:09.585659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.436 [2024-05-15 10:59:09.585685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.436 #37 NEW cov: 12071 ft: 15046 corp: 31/526b lim: 25 exec/s: 37 rss: 73Mb L: 21/23 MS: 1 ChangeBinInt- 00:07:12.436 [2024-05-15 10:59:09.645216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.436 [2024-05-15 10:59:09.645246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.436 [2024-05-15 10:59:09.645318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.436 [2024-05-15 10:59:09.645339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.436 [2024-05-15 10:59:09.645470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.436 [2024-05-15 10:59:09.645494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.436 [2024-05-15 10:59:09.645617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.436 [2024-05-15 10:59:09.645638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.436 #38 NEW cov: 12071 ft: 15072 corp: 32/546b lim: 25 exec/s: 38 rss: 73Mb L: 20/23 MS: 1 ChangeBinInt- 00:07:12.694 [2024-05-15 10:59:09.705177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.694 [2024-05-15 10:59:09.705209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.694 [2024-05-15 10:59:09.705347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.694 [2024-05-15 
10:59:09.705370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.694 #39 NEW cov: 12071 ft: 15086 corp: 33/556b lim: 25 exec/s: 39 rss: 73Mb L: 10/23 MS: 1 ShuffleBytes- 00:07:12.694 [2024-05-15 10:59:09.755677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.694 [2024-05-15 10:59:09.755709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.694 [2024-05-15 10:59:09.755803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.694 [2024-05-15 10:59:09.755826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.694 [2024-05-15 10:59:09.755950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.694 [2024-05-15 10:59:09.755975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.694 [2024-05-15 10:59:09.756104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.694 [2024-05-15 10:59:09.756127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.694 #40 NEW cov: 12071 ft: 15091 corp: 34/579b lim: 25 exec/s: 20 rss: 73Mb L: 23/23 MS: 1 CMP- DE: "\013\000\000\000\000\000\000\000"- 00:07:12.695 #40 DONE cov: 12071 ft: 15091 corp: 34/579b lim: 25 exec/s: 20 rss: 73Mb 00:07:12.695 ###### Recommended dictionary. ###### 00:07:12.695 "\000\205\373Q\370\336q\010" # Uses: 0 00:07:12.695 "\001\000" # Uses: 0 00:07:12.695 "\013\000\000\000\000\000\000\000" # Uses: 0 00:07:12.695 ###### End of recommended dictionary. 
###### 00:07:12.695 Done 40 runs in 2 second(s) 00:07:12.695 [2024-05-15 10:59:09.787144] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:12.695 10:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:12.695 [2024-05-15 10:59:09.955564] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:12.695 [2024-05-15 10:59:09.955626] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410661 ] 00:07:12.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.953 [2024-05-15 10:59:10.207893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.211 [2024-05-15 10:59:10.302732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.211 [2024-05-15 10:59:10.362597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.211 [2024-05-15 10:59:10.378545] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:13.211 [2024-05-15 10:59:10.379000] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:13.211 INFO: Running with entropic power schedule (0xFF, 100). 00:07:13.211 INFO: Seed: 3420182239 00:07:13.211 INFO: Loaded 1 modules (352952 inline 8-bit counters): 352952 [0x291fc8c, 0x2975f44), 00:07:13.211 INFO: Loaded 1 PC tables (352952 PCs): 352952 [0x2975f48,0x2ed8ac8), 00:07:13.212 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:13.212 INFO: A corpus is not provided, starting from an empty corpus 00:07:13.212 #2 INITED exec/s: 0 rss: 64Mb 00:07:13.212 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:13.212 This may also happen if the target rejected all inputs we tried so far 00:07:13.212 [2024-05-15 10:59:10.434276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.212 [2024-05-15 10:59:10.434309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.212 [2024-05-15 10:59:10.434368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.212 [2024-05-15 10:59:10.434388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.779 NEW_FUNC[1/685]: 0x4ad7c0 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:13.779 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:13.779 #33 NEW cov: 11887 ft: 11888 corp: 2/54b lim: 100 exec/s: 0 rss: 70Mb L: 53/53 MS: 1 InsertRepeatedBytes- 00:07:13.779 [2024-05-15 10:59:10.754971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.755010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.779 [2024-05-15 10:59:10.755068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.755085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.779 NEW_FUNC[1/2]: 0xfa75e0 in posix_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1447 00:07:13.779 NEW_FUNC[2/2]: 0x1a49090 in spdk_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:522 00:07:13.779 #34 NEW cov: 12029 ft: 12517 corp: 3/107b lim: 100 exec/s: 0 rss: 70Mb L: 53/53 MS: 1 ChangeBinInt- 00:07:13.779 [2024-05-15 10:59:10.805022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.805053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.779 [2024-05-15 10:59:10.805109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.805124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.779 #40 NEW cov: 12035 ft: 12867 corp: 4/161b lim: 100 exec/s: 0 rss: 70Mb L: 54/54 MS: 1 CrossOver- 00:07:13.779 [2024-05-15 10:59:10.844981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.845010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.779 #41 NEW cov: 12120 ft: 13875 corp: 5/186b lim: 100 exec/s: 0 rss: 70Mb L: 25/54 MS: 1 CrossOver- 00:07:13.779 [2024-05-15 10:59:10.895305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.895334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.779 [2024-05-15 10:59:10.895384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.895400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.779 #42 NEW cov: 12120 ft: 13933 corp: 6/239b lim: 100 exec/s: 0 rss: 71Mb L: 53/54 MS: 1 ChangeByte- 00:07:13.779 [2024-05-15 10:59:10.935383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.935412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.779 [2024-05-15 10:59:10.935457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.935472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.779 #43 NEW cov: 12120 ft: 14079 corp: 7/287b lim: 100 exec/s: 0 rss: 71Mb L: 48/54 MS: 1 EraseBytes- 00:07:13.779 [2024-05-15 10:59:10.985528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 
[2024-05-15 10:59:10.985557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.779 [2024-05-15 10:59:10.985591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:10.985609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.779 #44 NEW cov: 12120 ft: 14124 corp: 8/340b lim: 100 exec/s: 0 rss: 71Mb L: 53/54 MS: 1 ChangeBinInt- 00:07:13.779 [2024-05-15 10:59:11.025527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1009317314560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.779 [2024-05-15 10:59:11.025556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.038 #45 NEW cov: 12120 ft: 14194 corp: 9/366b lim: 100 exec/s: 0 rss: 71Mb L: 26/54 MS: 1 InsertByte- 00:07:14.038 [2024-05-15 10:59:11.075775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.075805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.038 [2024-05-15 10:59:11.075835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.075850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.038 #46 NEW cov: 12120 ft: 14242 corp: 10/418b lim: 100 exec/s: 0 rss: 71Mb L: 52/54 MS: 1 CrossOver- 00:07:14.038 [2024-05-15 10:59:11.125752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11817445422220181504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.125781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.038 #50 NEW cov: 12120 ft: 14263 corp: 11/443b lim: 100 exec/s: 0 rss: 71Mb L: 25/54 MS: 4 EraseBytes-ChangeBit-ChangeByte-CrossOver- 00:07:14.038 [2024-05-15 10:59:11.176050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.176078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.038 [2024-05-15 10:59:11.176117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.176132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.038 #51 NEW cov: 12120 ft: 14282 corp: 12/496b lim: 100 exec/s: 0 rss: 71Mb L: 53/54 MS: 1 ShuffleBytes- 00:07:14.038 [2024-05-15 10:59:11.216402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.216429] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.038 [2024-05-15 10:59:11.216475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.216491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.038 [2024-05-15 10:59:11.216541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.216557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.038 [2024-05-15 10:59:11.216611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.216627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.038 #52 NEW cov: 12120 ft: 14700 corp: 13/586b lim: 100 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:14.038 [2024-05-15 10:59:11.266284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.266311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.038 [2024-05-15 10:59:11.266340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:71213169107795968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.038 [2024-05-15 10:59:11.266355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.038 #53 NEW cov: 12120 ft: 14742 corp: 14/639b lim: 100 exec/s: 0 rss: 71Mb L: 53/90 MS: 1 ShuffleBytes- 00:07:14.296 [2024-05-15 10:59:11.316703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.296 [2024-05-15 10:59:11.316731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.296 [2024-05-15 10:59:11.316771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.296 [2024-05-15 10:59:11.316787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.296 [2024-05-15 10:59:11.316839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.316854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.297 [2024-05-15 10:59:11.316907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:844424930131968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.316922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.297 NEW_FUNC[1/1]: 0x1a1bd80 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:14.297 #54 NEW cov: 12143 ft: 14820 corp: 15/729b lim: 100 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CMP- DE: "\003\000\000\000"- 00:07:14.297 [2024-05-15 10:59:11.366564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.366592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.297 [2024-05-15 10:59:11.366621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.366636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.297 #55 NEW cov: 12143 ft: 14825 corp: 16/782b lim: 100 exec/s: 0 rss: 72Mb L: 53/90 MS: 1 CrossOver- 00:07:14.297 [2024-05-15 10:59:11.416698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.416724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.297 [2024-05-15 10:59:11.416763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.416778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.297 #56 NEW cov: 12143 ft: 14844 corp: 17/830b lim: 100 exec/s: 56 rss: 72Mb L: 48/90 MS: 1 ChangeBit- 00:07:14.297 [2024-05-15 10:59:11.466739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11817445422220181504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.466769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.297 #57 NEW cov: 12143 ft: 14872 corp: 18/855b lim: 100 exec/s: 57 rss: 72Mb L: 25/90 MS: 1 CrossOver- 00:07:14.297 [2024-05-15 10:59:11.516997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.517024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.297 [2024-05-15 10:59:11.517055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.517070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.297 #58 NEW cov: 12143 ft: 14901 corp: 19/907b lim: 100 exec/s: 58 rss: 72Mb L: 52/90 MS: 1 ShuffleBytes- 00:07:14.297 [2024-05-15 10:59:11.557078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:48378511622144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.557106] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.297 [2024-05-15 10:59:11.557136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:71213169107795968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.297 [2024-05-15 10:59:11.557152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.555 #69 NEW cov: 12143 ft: 14953 corp: 20/960b lim: 100 exec/s: 69 rss: 72Mb L: 53/90 MS: 1 ChangeByte- 00:07:14.555 [2024-05-15 10:59:11.607233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.555 [2024-05-15 10:59:11.607260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.555 [2024-05-15 10:59:11.607289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.555 [2024-05-15 10:59:11.607305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.555 #70 NEW cov: 12143 ft: 14962 corp: 21/1013b lim: 100 exec/s: 70 rss: 72Mb L: 53/90 MS: 1 ChangeByte- 00:07:14.555 [2024-05-15 10:59:11.647194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.555 [2024-05-15 10:59:11.647222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.555 #71 NEW cov: 12143 ft: 14977 corp: 22/1048b lim: 100 exec/s: 71 rss: 72Mb L: 35/90 MS: 1 EraseBytes- 00:07:14.556 [2024-05-15 10:59:11.687289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.556 [2024-05-15 10:59:11.687317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.556 #72 NEW cov: 12143 ft: 14988 corp: 23/1084b lim: 100 exec/s: 72 rss: 72Mb L: 36/90 MS: 1 EraseBytes- 00:07:14.556 [2024-05-15 10:59:11.727594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.556 [2024-05-15 10:59:11.727620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.556 [2024-05-15 10:59:11.727662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.556 [2024-05-15 10:59:11.727678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.556 #73 NEW cov: 12143 ft: 15046 corp: 24/1137b lim: 100 exec/s: 73 rss: 72Mb L: 53/90 MS: 1 ShuffleBytes- 00:07:14.556 [2024-05-15 10:59:11.777727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.556 [2024-05-15 10:59:11.777755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:14.556 [2024-05-15 10:59:11.777803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.556 [2024-05-15 10:59:11.777818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.556 #74 NEW cov: 12143 ft: 15057 corp: 25/1190b lim: 100 exec/s: 74 rss: 72Mb L: 53/90 MS: 1 ChangeBinInt- 00:07:14.556 [2024-05-15 10:59:11.817829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.556 [2024-05-15 10:59:11.817859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.556 [2024-05-15 10:59:11.817915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:71213169107795968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.556 [2024-05-15 10:59:11.817931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.814 #75 NEW cov: 12143 ft: 15070 corp: 26/1243b lim: 100 exec/s: 75 rss: 72Mb L: 53/90 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:07:14.814 [2024-05-15 10:59:11.858181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.814 [2024-05-15 10:59:11.858209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.814 [2024-05-15 10:59:11.858249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.814 [2024-05-15 10:59:11.858265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.814 [2024-05-15 10:59:11.858319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.814 [2024-05-15 10:59:11.858335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.814 [2024-05-15 10:59:11.858389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.814 [2024-05-15 10:59:11.858404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.814 #76 NEW cov: 12143 ft: 15102 corp: 27/1333b lim: 100 exec/s: 76 rss: 72Mb L: 90/90 MS: 1 ShuffleBytes- 00:07:14.814 [2024-05-15 10:59:11.898036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.898064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:11.898109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:71249452991512576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.898124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.815 #77 NEW cov: 12143 ft: 15105 corp: 28/1386b lim: 100 exec/s: 77 rss: 73Mb L: 53/90 MS: 1 ChangeByte- 00:07:14.815 [2024-05-15 10:59:11.948193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.948224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:11.948271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.948286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.815 #78 NEW cov: 12143 ft: 15115 corp: 29/1434b lim: 100 exec/s: 78 rss: 73Mb L: 48/90 MS: 1 ChangeByte- 00:07:14.815 [2024-05-15 10:59:11.998751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.998778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:11.998823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.998840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:11.998893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308569812991 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.998909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:11.998961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17221764979090255855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.998976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:11.999028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:11.999043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:14.815 #79 NEW cov: 12143 ft: 15215 corp: 30/1534b lim: 100 exec/s: 79 rss: 73Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:07:14.815 [2024-05-15 10:59:12.038720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:12.038748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:12.038792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 
10:59:12.038807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:12.038860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:12.038876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.815 [2024-05-15 10:59:12.038930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.815 [2024-05-15 10:59:12.038944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.815 #80 NEW cov: 12143 ft: 15241 corp: 31/1630b lim: 100 exec/s: 80 rss: 73Mb L: 96/100 MS: 1 InsertRepeatedBytes- 00:07:15.073 [2024-05-15 10:59:12.088647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.073 [2024-05-15 10:59:12.088679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.073 [2024-05-15 10:59:12.088728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.073 [2024-05-15 10:59:12.088744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.073 [2024-05-15 10:59:12.129108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.073 [2024-05-15 10:59:12.129137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.073 [2024-05-15 10:59:12.129179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.073 [2024-05-15 10:59:12.129197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.073 [2024-05-15 10:59:12.129250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.073 [2024-05-15 10:59:12.129266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.073 [2024-05-15 10:59:12.129319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.073 [2024-05-15 10:59:12.129335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.073 [2024-05-15 10:59:12.129392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.073 [2024-05-15 10:59:12.129408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:15.073 #82 NEW cov: 12143 ft: 15248 corp: 32/1730b 
lim: 100 exec/s: 82 rss: 73Mb L: 100/100 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:15.073 [2024-05-15 10:59:12.168702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:134217728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.074 [2024-05-15 10:59:12.168732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.074 #83 NEW cov: 12143 ft: 15281 corp: 33/1754b lim: 100 exec/s: 83 rss: 73Mb L: 24/100 MS: 1 EraseBytes- 00:07:15.074 [2024-05-15 10:59:12.208927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.074 [2024-05-15 10:59:12.208954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.074 [2024-05-15 10:59:12.209004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.074 [2024-05-15 10:59:12.209019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.074 #84 NEW cov: 12143 ft: 15287 corp: 34/1807b lim: 100 exec/s: 84 rss: 73Mb L: 53/100 MS: 1 ChangeBinInt- 00:07:15.074 [2024-05-15 10:59:12.259076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.074 [2024-05-15 10:59:12.259105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.074 [2024-05-15 10:59:12.259146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1086626725921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.074 [2024-05-15 10:59:12.259164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.074 #85 NEW cov: 12143 ft: 15290 corp: 35/1861b lim: 100 exec/s: 85 rss: 73Mb L: 54/100 MS: 1 InsertByte- 00:07:15.074 [2024-05-15 10:59:12.299078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.074 [2024-05-15 10:59:12.299106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.074 #86 NEW cov: 12143 ft: 15331 corp: 36/1897b lim: 100 exec/s: 86 rss: 73Mb L: 36/100 MS: 1 ChangeByte- 00:07:15.333 [2024-05-15 10:59:12.349328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:196608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.333 [2024-05-15 10:59:12.349357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.333 [2024-05-15 10:59:12.349400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:278176441827328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.333 [2024-05-15 10:59:12.349416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.333 #87 NEW cov: 12143 ft: 15335 corp: 37/1950b lim: 100 exec/s: 87 rss: 73Mb L: 53/100 MS: 1 PersAutoDict- DE: 
"\003\000\000\000"- 00:07:15.333 [2024-05-15 10:59:12.399472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:71777218572845056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.333 [2024-05-15 10:59:12.399501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.333 [2024-05-15 10:59:12.399530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:64769 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.333 [2024-05-15 10:59:12.399546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.333 #88 NEW cov: 12143 ft: 15337 corp: 38/2007b lim: 100 exec/s: 44 rss: 73Mb L: 57/100 MS: 1 CMP- DE: "\377\001\000\000"- 00:07:15.333 #88 DONE cov: 12143 ft: 15337 corp: 38/2007b lim: 100 exec/s: 44 rss: 73Mb 00:07:15.333 ###### Recommended dictionary. ###### 00:07:15.333 "\003\000\000\000" # Uses: 2 00:07:15.333 "\377\001\000\000" # Uses: 0 00:07:15.333 ###### End of recommended dictionary. ###### 00:07:15.333 Done 88 runs in 2 second(s) 00:07:15.333 [2024-05-15 10:59:12.419021] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:15.333 10:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:15.333 10:59:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:15.333 10:59:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.333 10:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:15.333 00:07:15.333 real 1m6.547s 00:07:15.333 user 1m40.827s 00:07:15.333 sys 0m9.024s 00:07:15.333 10:59:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:15.333 10:59:12 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:15.333 ************************************ 00:07:15.333 END TEST nvmf_fuzz 00:07:15.333 ************************************ 00:07:15.333 10:59:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:15.333 10:59:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:15.333 10:59:12 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:15.333 10:59:12 llvm_fuzz -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:15.333 10:59:12 llvm_fuzz -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:15.333 10:59:12 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:15.594 ************************************ 00:07:15.594 START TEST vfio_fuzz 00:07:15.594 ************************************ 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:15.594 * Looking for test storage... 
00:07:15.594 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:15.594 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:15.595 10:59:12 
llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:15.595 #define SPDK_CONFIG_H 00:07:15.595 #define SPDK_CONFIG_APPS 1 00:07:15.595 #define SPDK_CONFIG_ARCH native 00:07:15.595 #undef SPDK_CONFIG_ASAN 00:07:15.595 #undef SPDK_CONFIG_AVAHI 00:07:15.595 #undef SPDK_CONFIG_CET 00:07:15.595 #define SPDK_CONFIG_COVERAGE 1 00:07:15.595 #define SPDK_CONFIG_CROSS_PREFIX 00:07:15.595 #undef SPDK_CONFIG_CRYPTO 00:07:15.595 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:15.595 #undef SPDK_CONFIG_CUSTOMOCF 00:07:15.595 #undef SPDK_CONFIG_DAOS 00:07:15.595 #define SPDK_CONFIG_DAOS_DIR 00:07:15.595 #define SPDK_CONFIG_DEBUG 1 00:07:15.595 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:15.595 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:15.595 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:15.595 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:15.595 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:15.595 #undef SPDK_CONFIG_DPDK_UADK 00:07:15.595 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:15.595 #define SPDK_CONFIG_EXAMPLES 1 00:07:15.595 #undef SPDK_CONFIG_FC 00:07:15.595 #define SPDK_CONFIG_FC_PATH 00:07:15.595 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:15.595 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:15.595 #undef SPDK_CONFIG_FUSE 00:07:15.595 #define SPDK_CONFIG_FUZZER 1 00:07:15.595 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:15.595 #undef SPDK_CONFIG_GOLANG 00:07:15.595 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:15.595 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:15.595 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:15.595 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:15.595 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:15.595 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:15.595 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:15.595 #define SPDK_CONFIG_IDXD 1 00:07:15.595 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:15.595 #undef SPDK_CONFIG_IPSEC_MB 00:07:15.595 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:15.595 #define SPDK_CONFIG_ISAL 1 00:07:15.595 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:15.595 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:15.595 #define SPDK_CONFIG_LIBDIR 00:07:15.595 #undef SPDK_CONFIG_LTO 00:07:15.595 #define SPDK_CONFIG_MAX_LCORES 00:07:15.595 #define SPDK_CONFIG_NVME_CUSE 1 00:07:15.595 #undef SPDK_CONFIG_OCF 00:07:15.595 #define SPDK_CONFIG_OCF_PATH 00:07:15.595 #define SPDK_CONFIG_OPENSSL_PATH 00:07:15.595 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:15.595 #define SPDK_CONFIG_PGO_DIR 00:07:15.595 #undef SPDK_CONFIG_PGO_USE 00:07:15.595 #define SPDK_CONFIG_PREFIX /usr/local 00:07:15.595 #undef SPDK_CONFIG_RAID5F 00:07:15.595 #undef 
SPDK_CONFIG_RBD 00:07:15.595 #define SPDK_CONFIG_RDMA 1 00:07:15.595 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:15.595 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:15.595 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:15.595 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:15.595 #undef SPDK_CONFIG_SHARED 00:07:15.595 #undef SPDK_CONFIG_SMA 00:07:15.595 #define SPDK_CONFIG_TESTS 1 00:07:15.595 #undef SPDK_CONFIG_TSAN 00:07:15.595 #define SPDK_CONFIG_UBLK 1 00:07:15.595 #define SPDK_CONFIG_UBSAN 1 00:07:15.595 #undef SPDK_CONFIG_UNIT_TESTS 00:07:15.595 #undef SPDK_CONFIG_URING 00:07:15.595 #define SPDK_CONFIG_URING_PATH 00:07:15.595 #undef SPDK_CONFIG_URING_ZNS 00:07:15.595 #undef SPDK_CONFIG_USDT 00:07:15.595 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:15.595 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:15.595 #define SPDK_CONFIG_VFIO_USER 1 00:07:15.595 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:15.595 #define SPDK_CONFIG_VHOST 1 00:07:15.595 #define SPDK_CONFIG_VIRTIO 1 00:07:15.595 #undef SPDK_CONFIG_VTUNE 00:07:15.595 #define SPDK_CONFIG_VTUNE_DIR 00:07:15.595 #define SPDK_CONFIG_WERROR 1 00:07:15.595 #define SPDK_CONFIG_WPDK_DIR 00:07:15.595 #undef SPDK_CONFIG_XNVME 00:07:15.595 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:07:15.595 10:59:12 
llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:15.595 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # : 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:15.596 
10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # : 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # : 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # : 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # : 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:15.596 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # [[ -z 1411223 ]] 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # kill -0 1411223 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:15.597 
10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Pzg04V 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.Pzg04V/tests/vfio /tmp/spdk.Pzg04V 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=968024064 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4316405760 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=52283371520 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=9458933760 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866440192 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342489088 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:07:15.597 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5971968 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30869565440 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1589248 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:15.598 * Looking for test storage... 
00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # target_space=52283371520 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # new_size=11673526272 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:15.598 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # set -o errtrace 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1684 -- # true 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1686 -- # xtrace_fd 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:15.598 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:15.857 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:15.857 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:15.857 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:15.857 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:15.857 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:15.857 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:07:15.857 10:59:12 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:15.857 [2024-05-15 10:59:12.892565] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:07:15.857 [2024-05-15 10:59:12.892628] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411268 ] 00:07:15.857 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.857 [2024-05-15 10:59:12.964209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.857 [2024-05-15 10:59:13.035162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.115 [2024-05-15 10:59:13.205737] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:16.115 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.115 INFO: Seed: 1951222597 00:07:16.115 INFO: Loaded 1 modules (350188 inline 8-bit counters): 350188 [0x28e048c, 0x2935c78), 00:07:16.115 INFO: Loaded 1 PC tables (350188 PCs): 350188 [0x2935c78,0x2e8db38), 00:07:16.115 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:16.115 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.115 #2 INITED exec/s: 0 rss: 64Mb 00:07:16.115 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:16.115 This may also happen if the target rejected all inputs we tried so far 00:07:16.115 [2024-05-15 10:59:13.275984] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:16.631 NEW_FUNC[1/646]: 0x481740 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:16.631 NEW_FUNC[2/646]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:16.631 #10 NEW cov: 10924 ft: 10500 corp: 2/7b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:07:16.889 #11 NEW cov: 10938 ft: 14253 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:16.889 NEW_FUNC[1/2]: 0x117d680 in nvmf_prop_get_asq /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1382 00:07:16.889 NEW_FUNC[2/2]: 0x19e82b0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:16.889 #14 NEW cov: 10967 ft: 15317 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 3 EraseBytes-ChangeByte-CopyPart- 00:07:17.147 #24 NEW cov: 10974 ft: 15442 corp: 5/25b lim: 6 exec/s: 24 rss: 73Mb L: 6/6 MS: 5 CrossOver-CopyPart-CopyPart-ChangeByte-InsertByte- 00:07:17.405 #25 NEW cov: 10974 ft: 15840 corp: 6/31b lim: 6 exec/s: 25 rss: 73Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:17.664 #26 NEW cov: 10974 ft: 16130 corp: 7/37b lim: 6 exec/s: 26 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:17.922 #32 NEW cov: 10974 ft: 16226 corp: 8/43b lim: 6 exec/s: 32 rss: 73Mb L: 6/6 MS: 1 CMP- DE: "\002\000\000\000"- 00:07:17.922 #33 NEW cov: 10981 ft: 16355 corp: 9/49b lim: 6 exec/s: 33 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:07:18.179 #34 NEW cov: 10981 ft: 16580 corp: 10/55b lim: 6 exec/s: 17 rss: 74Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:07:18.180 #34 DONE cov: 10981 ft: 16580 corp: 10/55b lim: 6 exec/s: 17 rss: 74Mb 00:07:18.180 ###### Recommended dictionary. ###### 00:07:18.180 "\002\000\000\000" # Uses: 0 00:07:18.180 ###### End of recommended dictionary. 
###### 00:07:18.180 Done 34 runs in 2 second(s) 00:07:18.180 [2024-05-15 10:59:15.405579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:18.438 [2024-05-15 10:59:15.460113] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:18.438 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:18.438 10:59:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:18.438 [2024-05-15 10:59:15.698333] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:18.438 [2024-05-15 10:59:15.698435] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411757 ] 00:07:18.696 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.696 [2024-05-15 10:59:15.773594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.696 [2024-05-15 10:59:15.846532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.954 [2024-05-15 10:59:16.021878] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:18.954 INFO: Running with entropic power schedule (0xFF, 100). 00:07:18.954 INFO: Seed: 474271605 00:07:18.954 INFO: Loaded 1 modules (350188 inline 8-bit counters): 350188 [0x28e048c, 0x2935c78), 00:07:18.954 INFO: Loaded 1 PC tables (350188 PCs): 350188 [0x2935c78,0x2e8db38), 00:07:18.954 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:18.954 INFO: A corpus is not provided, starting from an empty corpus 00:07:18.954 #2 INITED exec/s: 0 rss: 64Mb 00:07:18.954 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:18.954 This may also happen if the target rejected all inputs we tried so far 00:07:18.954 [2024-05-15 10:59:16.093611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:18.954 [2024-05-15 10:59:16.121410] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:18.954 [2024-05-15 10:59:16.121435] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:18.954 [2024-05-15 10:59:16.121453] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.470 NEW_FUNC[1/648]: 0x481ce0 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:19.470 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:19.470 #15 NEW cov: 10917 ft: 10834 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 3 ChangeBit-InsertByte-CopyPart- 00:07:19.470 [2024-05-15 10:59:16.541463] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.470 [2024-05-15 10:59:16.541494] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.470 [2024-05-15 10:59:16.541513] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.470 #16 NEW cov: 10931 ft: 13521 corp: 3/9b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:07:19.470 [2024-05-15 10:59:16.655317] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.470 [2024-05-15 10:59:16.655345] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.470 [2024-05-15 10:59:16.655364] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.470 #17 NEW cov: 10931 ft: 14949 corp: 4/13b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:19.727 [2024-05-15 10:59:16.768208] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.727 
[2024-05-15 10:59:16.768235] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.727 [2024-05-15 10:59:16.768254] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.728 #18 NEW cov: 10931 ft: 15638 corp: 5/17b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:19.728 [2024-05-15 10:59:16.893232] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.728 [2024-05-15 10:59:16.893260] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.728 [2024-05-15 10:59:16.893279] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.728 NEW_FUNC[1/1]: 0x19e82b0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:19.728 #20 NEW cov: 10948 ft: 15978 corp: 6/21b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 2 EraseBytes-InsertByte- 00:07:19.986 [2024-05-15 10:59:17.008131] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.986 [2024-05-15 10:59:17.008158] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.986 [2024-05-15 10:59:17.008177] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.986 #26 NEW cov: 10948 ft: 16197 corp: 7/25b lim: 4 exec/s: 26 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:19.986 [2024-05-15 10:59:17.123112] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.986 [2024-05-15 10:59:17.123138] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.986 [2024-05-15 10:59:17.123157] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.986 #32 NEW cov: 10948 ft: 16286 corp: 8/29b lim: 4 exec/s: 32 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:07:19.986 [2024-05-15 10:59:17.237199] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.986 [2024-05-15 10:59:17.237230] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.986 [2024-05-15 10:59:17.237250] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.244 #38 NEW cov: 10948 ft: 16344 corp: 9/33b lim: 4 exec/s: 38 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:20.244 [2024-05-15 10:59:17.350168] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.244 [2024-05-15 10:59:17.350193] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.244 [2024-05-15 10:59:17.350211] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.244 #39 NEW cov: 10948 ft: 16418 corp: 10/37b lim: 4 exec/s: 39 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:20.244 [2024-05-15 10:59:17.462084] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.244 [2024-05-15 10:59:17.462109] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.244 [2024-05-15 10:59:17.462128] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.502 #40 NEW cov: 10948 ft: 16446 corp: 11/41b lim: 4 exec/s: 40 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:20.502 [2024-05-15 10:59:17.576004] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.502 
[2024-05-15 10:59:17.576031] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.502 [2024-05-15 10:59:17.576050] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.502 #41 NEW cov: 10948 ft: 16470 corp: 12/45b lim: 4 exec/s: 41 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:20.502 [2024-05-15 10:59:17.688909] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.502 [2024-05-15 10:59:17.688935] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.502 [2024-05-15 10:59:17.688954] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.502 #42 NEW cov: 10948 ft: 16652 corp: 13/49b lim: 4 exec/s: 42 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:20.760 [2024-05-15 10:59:17.801840] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.760 [2024-05-15 10:59:17.801866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.760 [2024-05-15 10:59:17.801885] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.760 #43 NEW cov: 10955 ft: 16757 corp: 14/53b lim: 4 exec/s: 43 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:20.760 [2024-05-15 10:59:17.914650] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.760 [2024-05-15 10:59:17.914676] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.760 [2024-05-15 10:59:17.914694] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.760 #44 NEW cov: 10955 ft: 16830 corp: 15/57b lim: 4 exec/s: 44 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:21.018 [2024-05-15 10:59:18.029680] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:21.018 [2024-05-15 10:59:18.029707] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:21.019 [2024-05-15 10:59:18.029726] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:21.019 #45 NEW cov: 10955 ft: 16854 corp: 16/61b lim: 4 exec/s: 22 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:21.019 #45 DONE cov: 10955 ft: 16854 corp: 16/61b lim: 4 exec/s: 22 rss: 74Mb 00:07:21.019 Done 45 runs in 2 second(s) 00:07:21.019 [2024-05-15 10:59:18.117574] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:21.019 [2024-05-15 10:59:18.168319] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:21.277 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:21.277 10:59:18 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.277 10:59:18 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.277 10:59:18 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:21.277 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:21.277 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:21.278 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:21.278 10:59:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:21.278 [2024-05-15 10:59:18.401662] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 00:07:21.278 [2024-05-15 10:59:18.401739] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412113 ] 00:07:21.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.278 [2024-05-15 10:59:18.473858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.536 [2024-05-15 10:59:18.547239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.536 [2024-05-15 10:59:18.716136] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:21.536 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.536 INFO: Seed: 3166255651 00:07:21.536 INFO: Loaded 1 modules (350188 inline 8-bit counters): 350188 [0x28e048c, 0x2935c78), 00:07:21.536 INFO: Loaded 1 PC tables (350188 PCs): 350188 [0x2935c78,0x2e8db38), 00:07:21.536 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:21.536 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.536 #2 INITED exec/s: 0 rss: 64Mb 00:07:21.536 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:21.536 This may also happen if the target rejected all inputs we tried so far 00:07:21.536 [2024-05-15 10:59:18.785535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:21.795 [2024-05-15 10:59:18.853518] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:21.795 [2024-05-15 10:59:18.853555] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:22.053 NEW_FUNC[1/646]: 0x4826c0 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:22.053 NEW_FUNC[2/646]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:22.053 #31 NEW cov: 10863 ft: 10877 corp: 2/9b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 4 ChangeBit-ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:22.053 [2024-05-15 10:59:19.314897] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:22.348 NEW_FUNC[1/2]: 0x170e430 in nvme_qpair_resubmit_requests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:699 00:07:22.348 NEW_FUNC[2/2]: 0x1d2b630 in spdk_io_channel_get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:2478 00:07:22.348 #32 NEW cov: 10921 ft: 14181 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 CopyPart- 00:07:22.348 [2024-05-15 10:59:19.514360] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:22.631 NEW_FUNC[1/1]: 0x19e82b0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:22.631 #33 NEW cov: 10941 ft: 15573 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:07:22.631 [2024-05-15 10:59:19.708675] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:22.631 #34 NEW cov: 10941 ft: 16226 corp: 5/33b lim: 8 exec/s: 34 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:22.631 [2024-05-15 10:59:19.892009] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:22.889 #35 NEW cov: 10941 ft: 16522 corp: 6/41b lim: 8 exec/s: 35 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:07:22.889 [2024-05-15 10:59:20.071730] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.147 #36 NEW cov: 10941 ft: 17012 corp: 7/49b lim: 8 exec/s: 36 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:23.147 [2024-05-15 10:59:20.260131] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.147 #37 NEW cov: 10941 ft: 17400 corp: 8/57b lim: 8 exec/s: 37 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:23.404 [2024-05-15 10:59:20.454448] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.404 #38 NEW cov: 10948 ft: 17793 corp: 9/65b lim: 8 exec/s: 38 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:23.404 [2024-05-15 10:59:20.642410] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: cmd 5 failed: Invalid argument 00:07:23.404 [2024-05-15 10:59:20.642454] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:23.662 #39 NEW cov: 10948 ft: 17934 corp: 10/73b lim: 8 exec/s: 39 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:23.662 [2024-05-15 10:59:20.829111] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.920 #40 NEW 
cov: 10948 ft: 17998 corp: 11/81b lim: 8 exec/s: 20 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:23.920 #40 DONE cov: 10948 ft: 17998 corp: 11/81b lim: 8 exec/s: 20 rss: 74Mb 00:07:23.920 Done 40 runs in 2 second(s) 00:07:23.920 [2024-05-15 10:59:20.957576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:23.920 [2024-05-15 10:59:21.007047] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:24.179 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:24.179 10:59:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:24.179 [2024-05-15 10:59:21.241891] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:24.179 [2024-05-15 10:59:21.241961] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412635 ] 00:07:24.179 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.179 [2024-05-15 10:59:21.313348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.179 [2024-05-15 10:59:21.384879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.437 [2024-05-15 10:59:21.550416] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:24.437 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.437 INFO: Seed: 1708298540 00:07:24.437 INFO: Loaded 1 modules (350188 inline 8-bit counters): 350188 [0x28e048c, 0x2935c78), 00:07:24.437 INFO: Loaded 1 PC tables (350188 PCs): 350188 [0x2935c78,0x2e8db38), 00:07:24.437 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:24.437 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.437 #2 INITED exec/s: 0 rss: 63Mb 00:07:24.437 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.437 This may also happen if the target rejected all inputs we tried so far 00:07:24.437 [2024-05-15 10:59:21.629899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:24.953 NEW_FUNC[1/647]: 0x482da0 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:24.953 NEW_FUNC[2/647]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:24.953 #29 NEW cov: 10899 ft: 10871 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:25.211 #35 NEW cov: 10919 ft: 13286 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:07:25.211 NEW_FUNC[1/1]: 0x19e82b0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:25.211 #36 NEW cov: 10936 ft: 14724 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:25.468 #42 NEW cov: 10939 ft: 15078 corp: 5/129b lim: 32 exec/s: 42 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:25.726 #48 NEW cov: 10939 ft: 15201 corp: 6/161b lim: 32 exec/s: 48 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:25.985 #49 NEW cov: 10939 ft: 15496 corp: 7/193b lim: 32 exec/s: 49 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:25.985 #50 NEW cov: 10939 ft: 15939 corp: 8/225b lim: 32 exec/s: 50 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:26.242 #56 NEW cov: 10946 ft: 16280 corp: 9/257b lim: 32 exec/s: 56 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:26.501 #57 NEW cov: 10946 ft: 16294 corp: 10/289b lim: 32 exec/s: 28 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:26.501 #57 DONE cov: 10946 ft: 16294 corp: 10/289b lim: 32 exec/s: 28 rss: 73Mb 00:07:26.501 Done 57 runs in 2 second(s) 00:07:26.501 [2024-05-15 10:59:23.613591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:26.501 [2024-05-15 10:59:23.663454] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in 
v24.09 hit 1 times 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:26.757 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:26.757 10:59:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:26.757 [2024-05-15 10:59:23.897854] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:26.757 [2024-05-15 10:59:23.897923] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413170 ] 00:07:26.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.757 [2024-05-15 10:59:23.969625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.014 [2024-05-15 10:59:24.041040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.015 [2024-05-15 10:59:24.212385] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:27.015 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.015 INFO: Seed: 75339603 00:07:27.015 INFO: Loaded 1 modules (350188 inline 8-bit counters): 350188 [0x28e048c, 0x2935c78), 00:07:27.015 INFO: Loaded 1 PC tables (350188 PCs): 350188 [0x2935c78,0x2e8db38), 00:07:27.015 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:27.015 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.015 #2 INITED exec/s: 0 rss: 64Mb 00:07:27.015 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.015 This may also happen if the target rejected all inputs we tried so far 00:07:27.015 [2024-05-15 10:59:24.280098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:27.531 NEW_FUNC[1/636]: 0x483620 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:27.531 NEW_FUNC[2/636]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:27.531 #37 NEW cov: 10782 ft: 10865 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 5 CopyPart-InsertRepeatedBytes-ChangeByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:07:27.789 NEW_FUNC[1/11]: 0x111bec0 in spdk_nvmf_request_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4575 00:07:27.789 NEW_FUNC[2/11]: 0x111c280 in spdk_thread_exec_msg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/thread.h:546 00:07:27.789 #53 NEW cov: 10924 ft: 13375 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeByte- 00:07:28.046 NEW_FUNC[1/1]: 0x19e82b0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:28.046 #59 NEW cov: 10941 ft: 13991 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:28.303 #70 NEW cov: 10941 ft: 15430 corp: 5/129b lim: 32 exec/s: 70 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:28.303 #86 NEW cov: 10941 ft: 15718 corp: 6/161b lim: 32 exec/s: 86 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:28.561 #92 NEW cov: 10941 ft: 15817 corp: 7/193b lim: 32 exec/s: 92 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:28.819 #93 NEW cov: 10941 ft: 16239 corp: 8/225b lim: 32 exec/s: 93 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:28.819 #94 NEW cov: 10948 ft: 16546 corp: 9/257b lim: 32 exec/s: 94 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:29.077 #95 NEW cov: 10948 ft: 16617 corp: 10/289b lim: 32 exec/s: 47 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:29.077 #95 DONE cov: 10948 ft: 16617 corp: 10/289b lim: 32 exec/s: 47 rss: 74Mb 00:07:29.077 Done 95 runs in 2 second(s) 00:07:29.077 
[2024-05-15 10:59:26.291598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:29.077 [2024-05-15 10:59:26.341299] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:29.335 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:29.335 10:59:26 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:29.335 [2024-05-15 10:59:26.576741] Starting SPDK v24.05-pre git sha1 01f10b8a3 / DPDK 23.11.0 initialization... 
00:07:29.336 [2024-05-15 10:59:26.576810] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413682 ] 00:07:29.594 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.594 [2024-05-15 10:59:26.649771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.594 [2024-05-15 10:59:26.721957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.852 [2024-05-15 10:59:26.887961] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:29.852 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.852 INFO: Seed: 2749318751 00:07:29.852 INFO: Loaded 1 modules (350188 inline 8-bit counters): 350188 [0x28e048c, 0x2935c78), 00:07:29.852 INFO: Loaded 1 PC tables (350188 PCs): 350188 [0x2935c78,0x2e8db38), 00:07:29.852 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:29.852 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.852 #2 INITED exec/s: 0 rss: 64Mb 00:07:29.852 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:29.852 This may also happen if the target rejected all inputs we tried so far 00:07:29.852 [2024-05-15 10:59:26.957010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:29.852 [2024-05-15 10:59:27.011418] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:29.852 [2024-05-15 10:59:27.011463] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:30.368 NEW_FUNC[1/648]: 0x484020 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:30.368 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:30.368 #30 NEW cov: 10910 ft: 10882 corp: 2/14b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:30.368 [2024-05-15 10:59:27.487141] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:30.368 [2024-05-15 10:59:27.487183] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:30.368 #34 NEW cov: 10933 ft: 13355 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 4 InsertRepeatedBytes-CopyPart-CrossOver-InsertByte- 00:07:30.625 [2024-05-15 10:59:27.683806] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:30.625 [2024-05-15 10:59:27.683837] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:30.625 NEW_FUNC[1/1]: 0x19e82b0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:30.625 #40 NEW cov: 10950 ft: 13968 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:30.625 [2024-05-15 10:59:27.868178] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:30.625 [2024-05-15 10:59:27.868208] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:30.883 #46 NEW cov: 10950 ft: 15071 corp: 5/53b lim: 13 exec/s: 
46 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:07:30.883 [2024-05-15 10:59:28.053247] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:30.883 [2024-05-15 10:59:28.053278] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:31.140 #47 NEW cov: 10950 ft: 15479 corp: 6/66b lim: 13 exec/s: 47 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:31.140 [2024-05-15 10:59:28.235456] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:31.140 [2024-05-15 10:59:28.235486] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:31.140 #48 NEW cov: 10950 ft: 15791 corp: 7/79b lim: 13 exec/s: 48 rss: 73Mb L: 13/13 MS: 1 CMP- DE: "\001\000\177\375:\365 ioatdma 00:07:44.889 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:44.889 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:44.889 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:44.889 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:44.889 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:44.889 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:45.148 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:45.148 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:45.148 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:45.408 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:45.408 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:45.408 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:45.667 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:45.667 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:45.667 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:45.925 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:07:49.211 Cleaning 00:07:49.211 Removing: /dev/shm/spdk_tgt_trace.pid1378016 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1375553 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1376813 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1378016 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1378719 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1379565 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1379839 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1380944 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1380976 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1381369 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1381691 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1382011 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1382349 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1382677 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1382969 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1383249 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1383555 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1384417 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1388137 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1388435 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1388739 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1388999 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1389571 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1389617 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1390153 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1390402 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1390711 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1390735 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1391025 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1391197 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1391661 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1391946 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1392234 00:07:49.211 Removing: 
/var/run/dpdk/spdk_pid1392328 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1392611 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1392801 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1392947 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1393230 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1393517 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1393797 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1394083 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1394325 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1394538 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1394767 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1394993 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1395261 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1395550 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1395832 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1396125 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1396404 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1396685 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1396970 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1397256 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1397484 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1397718 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1397949 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1398195 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1398468 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1398817 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1399391 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1399824 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1400363 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1400900 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1401244 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1401719 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1402259 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1402701 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1403081 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1403621 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1404150 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1404502 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1404985 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1405512 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1406027 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1406354 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1406873 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1407408 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1407829 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1408234 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1408765 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1409302 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1409630 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1410124 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1410661 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1411268 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1411757 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1412113 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1412635 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1413170 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1413682 00:07:49.211 Removing: /var/run/dpdk/spdk_pid1414079 00:07:49.211 Clean 00:07:49.211 10:59:46 -- common/autotest_common.sh@1448 -- # return 0 00:07:49.211 10:59:46 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:07:49.211 10:59:46 -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:49.211 10:59:46 -- common/autotest_common.sh@10 -- # set +x 00:07:49.211 10:59:46 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:07:49.211 10:59:46 -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:49.211 10:59:46 -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.211 10:59:46 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:49.211 10:59:46 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:07:49.212 10:59:46 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:07:49.212 10:59:46 -- spdk/autotest.sh@387 -- # hash lcov 00:07:49.212 10:59:46 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:07:49.471 10:59:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:49.471 10:59:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:49.471 10:59:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.471 10:59:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.471 10:59:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.471 10:59:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.471 10:59:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.471 10:59:46 -- paths/export.sh@5 -- $ export PATH 00:07:49.471 10:59:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.471 10:59:46 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:07:49.471 10:59:46 -- common/autobuild_common.sh@437 -- $ date +%s 00:07:49.471 10:59:46 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715763586.XXXXXX 00:07:49.471 10:59:46 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715763586.J3Lh4i 00:07:49.471 10:59:46 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:07:49.471 10:59:46 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:07:49.471 10:59:46 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:07:49.471 10:59:46 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:07:49.471 10:59:46 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:07:49.471 10:59:46 -- common/autobuild_common.sh@453 -- $ get_config_params 00:07:49.471 10:59:46 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:07:49.471 10:59:46 -- common/autotest_common.sh@10 -- $ set +x 00:07:49.471 10:59:46 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:07:49.471 10:59:46 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:07:49.471 10:59:46 -- pm/common@17 -- $ local monitor 00:07:49.471 10:59:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:49.471 10:59:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:49.471 10:59:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:49.471 10:59:46 -- pm/common@21 -- $ date +%s 00:07:49.471 10:59:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:49.471 10:59:46 -- pm/common@21 -- $ date +%s 00:07:49.471 10:59:46 -- pm/common@25 -- $ sleep 1 00:07:49.471 10:59:46 -- pm/common@21 -- $ date +%s 00:07:49.471 10:59:46 -- pm/common@21 -- $ date +%s 00:07:49.471 10:59:46 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715763586 00:07:49.471 10:59:46 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715763586 00:07:49.471 10:59:46 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715763586 00:07:49.471 10:59:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715763586 00:07:49.471 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715763586_collect-vmstat.pm.log 00:07:49.471 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715763586_collect-cpu-load.pm.log 00:07:49.471 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715763586_collect-cpu-temp.pm.log 00:07:49.471 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715763586_collect-bmc-pm.bmc.pm.log 00:07:50.406 10:59:47 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:07:50.406 10:59:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:07:50.406 10:59:47 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:50.406 10:59:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:07:50.406 
10:59:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:07:50.406 10:59:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:07:50.406 10:59:47 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:07:50.406 10:59:47 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:07:50.406 10:59:47 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:50.406 10:59:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:07:50.406 10:59:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:07:50.406 10:59:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:50.406 10:59:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:50.406 10:59:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:50.406 10:59:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:50.406 10:59:47 -- pm/common@44 -- $ pid=1421276 00:07:50.406 10:59:47 -- pm/common@50 -- $ kill -TERM 1421276 00:07:50.406 10:59:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:50.406 10:59:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:50.406 10:59:47 -- pm/common@44 -- $ pid=1421278 00:07:50.406 10:59:47 -- pm/common@50 -- $ kill -TERM 1421278 00:07:50.406 10:59:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:50.406 10:59:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:50.406 10:59:47 -- pm/common@44 -- $ pid=1421280 00:07:50.406 10:59:47 -- pm/common@50 -- $ kill -TERM 1421280 00:07:50.406 10:59:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:50.406 10:59:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:50.406 10:59:47 -- pm/common@44 -- $ pid=1421324 00:07:50.406 10:59:47 -- pm/common@50 -- $ sudo -E kill -TERM 1421324 00:07:50.406 + [[ -n 1270138 ]] 00:07:50.406 + sudo kill 1270138 00:07:50.675 [Pipeline] } 00:07:50.693 [Pipeline] // stage 00:07:50.699 [Pipeline] } 00:07:50.716 [Pipeline] // timeout 00:07:50.723 [Pipeline] } 00:07:50.745 [Pipeline] // catchError 00:07:50.751 [Pipeline] } 00:07:50.772 [Pipeline] // wrap 00:07:50.779 [Pipeline] } 00:07:50.795 [Pipeline] // catchError 00:07:50.804 [Pipeline] stage 00:07:50.806 [Pipeline] { (Epilogue) 00:07:50.821 [Pipeline] catchError 00:07:50.823 [Pipeline] { 00:07:50.838 [Pipeline] echo 00:07:50.840 Cleanup processes 00:07:50.846 [Pipeline] sh 00:07:51.180 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:51.180 1331562 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763244 00:07:51.180 1331610 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715763244 00:07:51.180 1421442 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:07:51.180 1422336 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:51.194 
[Pipeline] sh 00:07:51.477 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:51.478 ++ grep -v 'sudo pgrep' 00:07:51.478 ++ awk '{print $1}' 00:07:51.478 + sudo kill -9 1331562 1331610 1421442 00:07:51.492 [Pipeline] sh 00:07:51.776 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:07:51.776 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:07:51.776 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:07:53.166 [Pipeline] sh 00:07:53.448 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:07:53.448 Artifacts sizes are good 00:07:53.464 [Pipeline] archiveArtifacts 00:07:53.471 Archiving artifacts 00:07:53.528 [Pipeline] sh 00:07:53.814 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:07:53.830 [Pipeline] cleanWs 00:07:53.840 [WS-CLEANUP] Deleting project workspace... 00:07:53.840 [WS-CLEANUP] Deferred wipeout is used... 00:07:53.846 [WS-CLEANUP] done 00:07:53.849 [Pipeline] } 00:07:53.876 [Pipeline] // catchError 00:07:53.891 [Pipeline] sh 00:07:54.175 + logger -p user.info -t JENKINS-CI 00:07:54.184 [Pipeline] } 00:07:54.201 [Pipeline] // stage 00:07:54.207 [Pipeline] } 00:07:54.229 [Pipeline] // node 00:07:54.235 [Pipeline] End of Pipeline 00:07:54.269 Finished: SUCCESS
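
For reference, the xtrace lines above (vfio/run.sh@22 through run.sh@47) record how each start_llvm_fuzz iteration is prepared and launched. Below is a minimal standalone sketch assembled only from the commands visible in this log, not taken from the real vfio/run.sh (which remains authoritative); N, WORKSPACE and SPDK_DIR are illustrative assumptions, and nothing is asserted about flag semantics beyond what the trace itself shows.

#!/usr/bin/env bash
# Hypothetical reconstruction of one start_llvm_fuzz iteration, based solely on the trace above.
set -euo pipefail

N=3                                                        # fuzzer_type, as in "start_llvm_fuzz 3 1 0x1"
WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest   # workspace path seen throughout the log
SPDK_DIR=$WORKSPACE/spdk

fuzzer_dir=/tmp/vfio-user-$N
vfiouser_dir=$fuzzer_dir/domain/1
vfiouser_io_dir=$fuzzer_dir/domain/2
vfiouser_cfg=$fuzzer_dir/fuzz_vfio_json.conf
corpus_dir=$SPDK_DIR/../corpus/llvm_vfio_$N
suppress_file=/var/tmp/suppress_vfio_fuzz

# Per-instance directories and corpus directory (run.sh@36 in the trace).
mkdir -p "$fuzzer_dir" "$vfiouser_dir" "$vfiouser_io_dir" "$corpus_dir"

# Point the shared JSON config at this instance's vfio-user directories (run.sh@39).
sed -e "s%/tmp/vfio-user/domain/1%$vfiouser_dir%" \
    -e "s%/tmp/vfio-user/domain/2%$vfiouser_io_dir%" \
    "$SPDK_DIR/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$vfiouser_cfg"

# LeakSanitizer suppressions and options (run.sh@43, @44, @34).
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"
export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

# Launch the fuzzer with the same flags the trace shows (run.sh@47).
"$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m 0x1 -s 0 \
    -P "$SPDK_DIR/../output/llvm/" -F "$vfiouser_dir" -c "$vfiouser_cfg" -t 1 \
    -D "$corpus_dir" -Y "$vfiouser_io_dir" -r "$fuzzer_dir/spdk$N.sock" -Z "$N"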