00:00:00.001 Started by upstream project "autotest-per-patch" build number 122890
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.065 The recommended git tool is: git
00:00:00.065 using credential 00000000-0000-0000-0000-000000000002
00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.113 Fetching changes from the remote Git repository
00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.154 Using shallow fetch with depth 1
00:00:00.154 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.154 > git --version # timeout=10
00:00:00.196 > git --version # 'git version 2.39.2'
00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.646 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.657 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.668 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD)
00:00:04.668 > git config core.sparsecheckout # timeout=10
00:00:04.679 > git read-tree -mu HEAD # timeout=10
00:00:04.692 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5
00:00:04.708 Commit message: "inventory/dev: add missing long names"
00:00:04.708 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10
00:00:04.781 [Pipeline] Start of Pipeline
00:00:04.797 [Pipeline] library
00:00:04.798 Loading library shm_lib@master
00:00:04.798 Library shm_lib@master is cached. Copying from home.
00:00:04.816 [Pipeline] node
00:00:04.834 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:04.835 [Pipeline] {
00:00:04.845 [Pipeline] catchError
00:00:04.846 [Pipeline] {
00:00:04.858 [Pipeline] wrap
00:00:04.869 [Pipeline] {
00:00:04.877 [Pipeline] stage
00:00:04.879 [Pipeline] { (Prologue)
00:00:05.083 [Pipeline] sh
00:00:05.364 + logger -p user.info -t JENKINS-CI
00:00:05.376 [Pipeline] echo
00:00:05.377 Node: WFP20
00:00:05.383 [Pipeline] sh
00:00:05.677 [Pipeline] setCustomBuildProperty
00:00:05.691 [Pipeline] echo
00:00:05.692 Cleanup processes
00:00:05.698 [Pipeline] sh
00:00:05.978 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.978 2268677 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.989 [Pipeline] sh
00:00:06.264 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:06.264 ++ grep -v 'sudo pgrep'
00:00:06.264 ++ awk '{print $1}'
00:00:06.264 + sudo kill -9
00:00:06.264 + true
00:00:06.279 [Pipeline] cleanWs
00:00:06.287 [WS-CLEANUP] Deleting project workspace...
00:00:06.287 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.293 [WS-CLEANUP] done
00:00:06.296 [Pipeline] setCustomBuildProperty
00:00:06.311 [Pipeline] sh
00:00:06.594 + sudo git config --global --replace-all safe.directory '*'
00:00:06.675 [Pipeline] nodesByLabel
00:00:06.676 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.689 [Pipeline] httpRequest
00:00:06.693 HttpMethod: GET
00:00:06.694 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:06.695 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:06.696 Response Code: HTTP/1.1 200 OK
00:00:06.697 Success: Status code 200 is in the accepted range: 200,404
00:00:06.697 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:07.105 [Pipeline] sh
00:00:07.384 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:07.401 [Pipeline] httpRequest
00:00:07.406 HttpMethod: GET
00:00:07.406 URL: http://10.211.164.101/packages/spdk_95a28e5018021aee444e964fedde3d40ced5d653.tar.gz
00:00:07.407 Sending request to url: http://10.211.164.101/packages/spdk_95a28e5018021aee444e964fedde3d40ced5d653.tar.gz
00:00:07.407 Response Code: HTTP/1.1 200 OK
00:00:07.408 Success: Status code 200 is in the accepted range: 200,404
00:00:07.408 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_95a28e5018021aee444e964fedde3d40ced5d653.tar.gz
00:00:25.459 [Pipeline] sh
00:00:25.740 + tar --no-same-owner -xf spdk_95a28e5018021aee444e964fedde3d40ced5d653.tar.gz
00:00:28.289 [Pipeline] sh
00:00:28.573 + git -C spdk log --oneline -n5
00:00:28.573 95a28e501 lvol: add lvol set external parent
00:00:28.573 3216253e6 lvol: add lvol set parent
00:00:28.573 567565736 blob: add blob set external parent
00:00:28.573 0e4f7fc9b blob: add blob set parent
00:00:28.573 4506c0c36 test/common: Enable inherit_errexit
00:00:28.586 [Pipeline] }
00:00:28.603 [Pipeline] // stage
00:00:28.611 [Pipeline] stage
00:00:28.614 [Pipeline] { (Prepare)
00:00:28.633 [Pipeline] writeFile
00:00:28.663 [Pipeline] sh
00:00:28.954 + logger -p user.info -t JENKINS-CI
00:00:28.968 [Pipeline] sh
00:00:29.251 + logger -p user.info -t JENKINS-CI
00:00:29.263 [Pipeline] sh
00:00:29.547 + cat autorun-spdk.conf
00:00:29.547 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.547 SPDK_TEST_FUZZER_SHORT=1
00:00:29.547 SPDK_TEST_FUZZER=1
00:00:29.547 SPDK_RUN_UBSAN=1
00:00:29.554 RUN_NIGHTLY=0
00:00:29.559 [Pipeline] readFile
00:00:29.584 [Pipeline] withEnv
00:00:29.586 [Pipeline] {
00:00:29.600 [Pipeline] sh
00:00:29.887 + set -ex
00:00:29.887 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:29.887 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:29.887 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.887 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:29.887 ++ SPDK_TEST_FUZZER=1
00:00:29.887 ++ SPDK_RUN_UBSAN=1
00:00:29.887 ++ RUN_NIGHTLY=0
00:00:29.887 + case $SPDK_TEST_NVMF_NICS in
00:00:29.887 + DRIVERS=
00:00:29.887 + [[ -n '' ]]
00:00:29.887 + exit 0
00:00:29.896 [Pipeline] }
00:00:29.914 [Pipeline] // withEnv
00:00:29.919 [Pipeline] }
00:00:29.936 [Pipeline] // stage
00:00:29.946 [Pipeline] catchError
00:00:29.949 [Pipeline] {
00:00:29.966 [Pipeline] timeout
00:00:29.966 Timeout set to expire in 30 min
00:00:29.968 [Pipeline] {
00:00:29.984 [Pipeline] stage
00:00:29.986 [Pipeline] { (Tests)
00:00:30.002 [Pipeline] sh
00:00:30.285 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:30.285 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:30.285 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:30.285 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:30.285 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:30.285 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:30.285 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:30.285 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:30.285 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:30.285 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:30.285 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:30.285 + source /etc/os-release
00:00:30.285 ++ NAME='Fedora Linux'
00:00:30.285 ++ VERSION='38 (Cloud Edition)'
00:00:30.285 ++ ID=fedora
00:00:30.285 ++ VERSION_ID=38
00:00:30.285 ++ VERSION_CODENAME=
00:00:30.285 ++ PLATFORM_ID=platform:f38
00:00:30.285 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:30.285 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:30.285 ++ LOGO=fedora-logo-icon
00:00:30.285 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:30.285 ++ HOME_URL=https://fedoraproject.org/
00:00:30.285 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:30.285 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:30.285 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:30.286 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:30.286 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:30.286 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:30.286 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:30.286 ++ SUPPORT_END=2024-05-14
00:00:30.286 ++ VARIANT='Cloud Edition'
00:00:30.286 ++ VARIANT_ID=cloud
00:00:30.286 + uname -a
00:00:30.286 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:30.286 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:32.821 Hugepages
00:00:32.821 node hugesize free / total
00:00:32.821 node0 1048576kB 0 / 0
00:00:32.821 node0 2048kB 0 / 0
00:00:32.821 node1 1048576kB 0 / 0
00:00:32.821 node1 2048kB 0 / 0
00:00:32.821
00:00:32.821 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:32.821 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:32.821 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:32.821 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:32.821 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:32.821 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:32.821 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:32.822 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:32.822 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:32.822 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:32.822 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:32.822 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:32.822 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:32.822 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:32.822 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:32.822 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:32.822 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:32.822 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:32.822 + rm -f /tmp/spdk-ld-path
00:00:32.822 + source autorun-spdk.conf
00:00:32.822 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.822 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:32.822 ++ SPDK_TEST_FUZZER=1
00:00:32.822 ++ SPDK_RUN_UBSAN=1
00:00:32.822 ++ RUN_NIGHTLY=0 00:00:32.822 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:32.822 + [[ -n '' ]] 00:00:32.822 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:32.822 + for M in /var/spdk/build-*-manifest.txt 00:00:32.822 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:32.822 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:32.822 + for M in /var/spdk/build-*-manifest.txt 00:00:32.822 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:32.822 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:32.822 ++ uname 00:00:32.822 + [[ Linux == \L\i\n\u\x ]] 00:00:32.822 + sudo dmesg -T 00:00:32.822 + sudo dmesg --clear 00:00:32.822 + dmesg_pid=2270116 00:00:32.822 + [[ Fedora Linux == FreeBSD ]] 00:00:32.822 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:32.822 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:32.822 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:32.822 + [[ -x /usr/src/fio-static/fio ]] 00:00:32.822 + export FIO_BIN=/usr/src/fio-static/fio 00:00:32.822 + FIO_BIN=/usr/src/fio-static/fio 00:00:32.822 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:32.822 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:32.822 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:32.822 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:32.822 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:32.822 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:32.822 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:32.822 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:32.822 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:32.822 + sudo dmesg -Tw 00:00:32.822 Test configuration: 00:00:32.822 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.822 SPDK_TEST_FUZZER_SHORT=1 00:00:32.822 SPDK_TEST_FUZZER=1 00:00:32.822 SPDK_RUN_UBSAN=1 00:00:33.082 RUN_NIGHTLY=0 12:21:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:33.082 12:21:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:33.082 12:21:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:33.082 12:21:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:33.082 12:21:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.082 12:21:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.082 12:21:17 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.082 12:21:17 -- paths/export.sh@5 -- $ export PATH 00:00:33.082 12:21:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.082 12:21:17 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:33.082 12:21:17 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:33.082 12:21:17 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715768477.XXXXXX 00:00:33.082 12:21:17 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715768477.cdJy7M 00:00:33.082 12:21:17 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:33.082 12:21:17 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:33.082 12:21:17 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:33.082 12:21:17 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:33.082 12:21:17 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:33.082 12:21:17 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:33.082 12:21:17 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:33.082 12:21:17 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.082 12:21:17 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:33.082 12:21:17 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:33.082 12:21:17 -- pm/common@17 -- $ local monitor 00:00:33.082 12:21:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.082 12:21:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.082 12:21:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.082 12:21:17 -- pm/common@21 -- $ date +%s 00:00:33.082 12:21:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.082 12:21:17 -- pm/common@21 -- $ date +%s 00:00:33.082 12:21:17 -- pm/common@25 -- $ sleep 1 00:00:33.082 12:21:17 -- pm/common@21 -- $ date +%s 00:00:33.082 12:21:17 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715768477 00:00:33.082 12:21:17 -- pm/common@21 -- $ date +%s 00:00:33.082 12:21:17 -- 
pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715768477 00:00:33.082 12:21:17 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715768477 00:00:33.083 12:21:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715768477 00:00:33.083 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715768477_collect-cpu-temp.pm.log 00:00:33.083 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715768477_collect-vmstat.pm.log 00:00:33.083 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715768477_collect-cpu-load.pm.log 00:00:33.083 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715768477_collect-bmc-pm.bmc.pm.log 00:00:34.021 12:21:18 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:34.021 12:21:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:34.021 12:21:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:34.021 12:21:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:34.021 12:21:18 -- spdk/autobuild.sh@16 -- $ date -u 00:00:34.021 Wed May 15 10:21:18 AM UTC 2024 00:00:34.021 12:21:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:34.021 v24.05-pre-662-g95a28e501 00:00:34.021 12:21:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:34.021 12:21:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:34.021 12:21:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:34.021 12:21:18 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:00:34.021 12:21:18 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:34.021 12:21:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:34.021 ************************************ 00:00:34.021 START TEST ubsan 00:00:34.021 ************************************ 00:00:34.021 12:21:18 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:00:34.021 using ubsan 00:00:34.021 00:00:34.021 real 0m0.001s 00:00:34.021 user 0m0.000s 00:00:34.021 sys 0m0.000s 00:00:34.021 12:21:18 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:00:34.021 12:21:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:34.021 ************************************ 00:00:34.021 END TEST ubsan 00:00:34.021 ************************************ 00:00:34.281 12:21:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:34.281 12:21:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:34.281 12:21:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:34.281 12:21:18 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:34.281 12:21:18 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:34.281 12:21:18 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:34.281 12:21:18 -- common/autotest_common.sh@1098 -- $ '[' 2 -le 1 ']' 00:00:34.281 12:21:18 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:34.281 12:21:18 -- 
common/autotest_common.sh@10 -- $ set +x 00:00:34.281 ************************************ 00:00:34.281 START TEST autobuild_llvm_precompile 00:00:34.281 ************************************ 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ _llvm_precompile 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:00:34.281 Target: x86_64-redhat-linux-gnu 00:00:34.281 Thread model: posix 00:00:34.281 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:00:34.281 12:21:18 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:34.541 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:34.541 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:34.800 Using 'verbs' RDMA provider 00:00:50.666 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:05.576 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:05.576 Creating mk/config.mk...done. 00:01:05.576 Creating mk/cc.flags.mk...done. 00:01:05.576 Type 'make' to build. 
00:01:05.576 00:01:05.576 real 0m29.633s 00:01:05.576 user 0m12.631s 00:01:05.576 sys 0m16.343s 00:01:05.576 12:21:48 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:05.577 12:21:48 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:05.577 ************************************ 00:01:05.577 END TEST autobuild_llvm_precompile 00:01:05.577 ************************************ 00:01:05.577 12:21:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.577 12:21:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.577 12:21:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.577 12:21:48 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:05.577 12:21:48 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:05.577 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:05.577 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:05.577 Using 'verbs' RDMA provider 00:01:17.797 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:30.004 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:30.004 Creating mk/config.mk...done. 00:01:30.004 Creating mk/cc.flags.mk...done. 00:01:30.004 Type 'make' to build. 00:01:30.004 12:22:13 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:30.005 12:22:13 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:30.005 12:22:13 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:30.005 12:22:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.005 ************************************ 00:01:30.005 START TEST make 00:01:30.005 ************************************ 00:01:30.005 12:22:14 make -- common/autotest_common.sh@1122 -- $ make -j112 00:01:30.005 make[1]: Nothing to be done for 'all'. 
00:01:31.385 The Meson build system
00:01:31.385 Version: 1.3.1
00:01:31.385 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:31.385 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:31.385 Build type: native build
00:01:31.385 Project name: libvfio-user
00:01:31.385 Project version: 0.0.1
00:01:31.385 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:31.385 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:31.385 Host machine cpu family: x86_64
00:01:31.385 Host machine cpu: x86_64
00:01:31.385 Run-time dependency threads found: YES
00:01:31.385 Library dl found: YES
00:01:31.385 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:31.385 Run-time dependency json-c found: YES 0.17
00:01:31.385 Run-time dependency cmocka found: YES 1.1.7
00:01:31.385 Program pytest-3 found: NO
00:01:31.385 Program flake8 found: NO
00:01:31.385 Program misspell-fixer found: NO
00:01:31.385 Program restructuredtext-lint found: NO
00:01:31.385 Program valgrind found: YES (/usr/bin/valgrind)
00:01:31.385 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:31.385 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:31.385 Compiler for C supports arguments -Wwrite-strings: YES
00:01:31.385 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:31.385 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:31.385 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:31.385 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:31.385 Build targets in project: 8 00:01:31.385 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:31.385 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:31.385 00:01:31.385 libvfio-user 0.0.1 00:01:31.385 00:01:31.385 User defined options 00:01:31.385 buildtype : debug 00:01:31.385 default_library: static 00:01:31.385 libdir : /usr/local/lib 00:01:31.385 00:01:31.385 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.954 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:31.954 [1/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:31.954 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:31.954 [3/36] Compiling C object samples/null.p/null.c.o 00:01:31.954 [4/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:31.954 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:31.954 [6/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:31.954 [7/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:31.954 [8/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:31.954 [9/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:31.955 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:31.955 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:31.955 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:31.955 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:31.955 [14/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:31.955 [15/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:31.955 [16/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:31.955 [17/36] Compiling C object samples/server.p/server.c.o 00:01:31.955 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:31.955 [19/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:31.955 [20/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:31.955 [21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:31.955 [22/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:31.955 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:31.955 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:31.955 [25/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:31.955 [26/36] Compiling C object samples/client.p/client.c.o 00:01:31.955 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:31.955 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:31.955 [29/36] Linking static target lib/libvfio-user.a 00:01:31.955 [30/36] Linking target samples/client 00:01:31.955 [31/36] Linking target test/unit_tests 00:01:31.955 [32/36] Linking target samples/null 00:01:31.955 [33/36] Linking target samples/shadow_ioeventfd_server 00:01:31.955 [34/36] Linking target samples/lspci 00:01:31.955 [35/36] Linking target samples/gpio-pci-idio-16 00:01:31.955 [36/36] Linking target samples/server 00:01:32.214 INFO: autodetecting backend as ninja 00:01:32.214 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.214 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.474 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.474 ninja: no work to do. 00:01:37.749 The Meson build system 00:01:37.749 Version: 1.3.1 00:01:37.749 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:37.749 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:37.749 Build type: native build 00:01:37.749 Program cat found: YES (/usr/bin/cat) 00:01:37.749 Project name: DPDK 00:01:37.749 Project version: 23.11.0 00:01:37.749 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:37.749 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:37.749 Host machine cpu family: x86_64 00:01:37.749 Host machine cpu: x86_64 00:01:37.749 Message: ## Building in Developer Mode ## 00:01:37.749 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:37.749 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:37.749 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:37.749 Program python3 found: YES (/usr/bin/python3) 00:01:37.749 Program cat found: YES (/usr/bin/cat) 00:01:37.749 Compiler for C supports arguments -march=native: YES 00:01:37.749 Checking for size of "void *" : 8 00:01:37.749 Checking for size of "void *" : 8 (cached) 00:01:37.749 Library m found: YES 00:01:37.749 Library numa found: YES 00:01:37.749 Has header "numaif.h" : YES 00:01:37.749 Library fdt found: NO 00:01:37.749 Library execinfo found: NO 00:01:37.749 Has header "execinfo.h" : YES 00:01:37.749 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:37.749 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:37.749 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:37.749 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:37.749 Run-time dependency openssl found: YES 3.0.9 00:01:37.749 Run-time dependency libpcap found: YES 1.10.4 00:01:37.749 Has header "pcap.h" with dependency libpcap: YES 00:01:37.749 Compiler for C supports arguments -Wcast-qual: YES 00:01:37.749 Compiler for C supports arguments -Wdeprecated: YES 00:01:37.749 Compiler for C supports arguments -Wformat: YES 00:01:37.749 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:37.749 Compiler for C supports arguments -Wformat-security: YES 00:01:37.749 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:37.749 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:37.749 Compiler for C supports arguments -Wnested-externs: YES 00:01:37.749 Compiler for C supports arguments -Wold-style-definition: YES 00:01:37.749 Compiler for C supports arguments -Wpointer-arith: YES 00:01:37.749 Compiler for C supports arguments -Wsign-compare: YES 00:01:37.749 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:37.749 Compiler for C supports arguments -Wundef: YES 00:01:37.749 Compiler for C supports arguments -Wwrite-strings: YES 00:01:37.749 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:37.749 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:37.749 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:37.749 Program objdump found: YES (/usr/bin/objdump) 00:01:37.749 
Compiler for C supports arguments -mavx512f: YES 00:01:37.749 Checking if "AVX512 checking" compiles: YES 00:01:37.749 Fetching value of define "__SSE4_2__" : 1 00:01:37.749 Fetching value of define "__AES__" : 1 00:01:37.749 Fetching value of define "__AVX__" : 1 00:01:37.749 Fetching value of define "__AVX2__" : 1 00:01:37.749 Fetching value of define "__AVX512BW__" : 1 00:01:37.749 Fetching value of define "__AVX512CD__" : 1 00:01:37.749 Fetching value of define "__AVX512DQ__" : 1 00:01:37.749 Fetching value of define "__AVX512F__" : 1 00:01:37.749 Fetching value of define "__AVX512VL__" : 1 00:01:37.749 Fetching value of define "__PCLMUL__" : 1 00:01:37.749 Fetching value of define "__RDRND__" : 1 00:01:37.749 Fetching value of define "__RDSEED__" : 1 00:01:37.749 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:37.749 Fetching value of define "__znver1__" : (undefined) 00:01:37.749 Fetching value of define "__znver2__" : (undefined) 00:01:37.749 Fetching value of define "__znver3__" : (undefined) 00:01:37.749 Fetching value of define "__znver4__" : (undefined) 00:01:37.749 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:37.749 Message: lib/log: Defining dependency "log" 00:01:37.749 Message: lib/kvargs: Defining dependency "kvargs" 00:01:37.749 Message: lib/telemetry: Defining dependency "telemetry" 00:01:37.749 Checking for function "getentropy" : NO 00:01:37.749 Message: lib/eal: Defining dependency "eal" 00:01:37.749 Message: lib/ring: Defining dependency "ring" 00:01:37.749 Message: lib/rcu: Defining dependency "rcu" 00:01:37.749 Message: lib/mempool: Defining dependency "mempool" 00:01:37.749 Message: lib/mbuf: Defining dependency "mbuf" 00:01:37.749 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:37.749 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:37.749 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:37.749 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:37.749 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:37.749 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:37.749 Compiler for C supports arguments -mpclmul: YES 00:01:37.749 Compiler for C supports arguments -maes: YES 00:01:37.749 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.749 Compiler for C supports arguments -mavx512bw: YES 00:01:37.749 Compiler for C supports arguments -mavx512dq: YES 00:01:37.749 Compiler for C supports arguments -mavx512vl: YES 00:01:37.749 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:37.749 Compiler for C supports arguments -mavx2: YES 00:01:37.749 Compiler for C supports arguments -mavx: YES 00:01:37.749 Message: lib/net: Defining dependency "net" 00:01:37.749 Message: lib/meter: Defining dependency "meter" 00:01:37.749 Message: lib/ethdev: Defining dependency "ethdev" 00:01:37.749 Message: lib/pci: Defining dependency "pci" 00:01:37.749 Message: lib/cmdline: Defining dependency "cmdline" 00:01:37.749 Message: lib/hash: Defining dependency "hash" 00:01:37.749 Message: lib/timer: Defining dependency "timer" 00:01:37.749 Message: lib/compressdev: Defining dependency "compressdev" 00:01:37.749 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:37.749 Message: lib/dmadev: Defining dependency "dmadev" 00:01:37.749 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:37.749 Message: lib/power: Defining dependency "power" 00:01:37.749 Message: lib/reorder: Defining dependency "reorder" 00:01:37.749 Message: lib/security: Defining dependency 
"security" 00:01:37.749 Has header "linux/userfaultfd.h" : YES 00:01:37.749 Has header "linux/vduse.h" : YES 00:01:37.749 Message: lib/vhost: Defining dependency "vhost" 00:01:37.749 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:37.749 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:37.749 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.749 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.749 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:37.749 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:37.749 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:37.749 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:37.749 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:37.749 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:37.749 Program doxygen found: YES (/usr/bin/doxygen) 00:01:37.749 Configuring doxy-api-html.conf using configuration 00:01:37.749 Configuring doxy-api-man.conf using configuration 00:01:37.750 Program mandb found: YES (/usr/bin/mandb) 00:01:37.750 Program sphinx-build found: NO 00:01:37.750 Configuring rte_build_config.h using configuration 00:01:37.750 Message: 00:01:37.750 ================= 00:01:37.750 Applications Enabled 00:01:37.750 ================= 00:01:37.750 00:01:37.750 apps: 00:01:37.750 00:01:37.750 00:01:37.750 Message: 00:01:37.750 ================= 00:01:37.750 Libraries Enabled 00:01:37.750 ================= 00:01:37.750 00:01:37.750 libs: 00:01:37.750 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:37.750 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:37.750 cryptodev, dmadev, power, reorder, security, vhost, 00:01:37.750 00:01:37.750 Message: 00:01:37.750 =============== 00:01:37.750 Drivers Enabled 00:01:37.750 =============== 00:01:37.750 00:01:37.750 common: 00:01:37.750 00:01:37.750 bus: 00:01:37.750 pci, vdev, 00:01:37.750 mempool: 00:01:37.750 ring, 00:01:37.750 dma: 00:01:37.750 00:01:37.750 net: 00:01:37.750 00:01:37.750 crypto: 00:01:37.750 00:01:37.750 compress: 00:01:37.750 00:01:37.750 vdpa: 00:01:37.750 00:01:37.750 00:01:37.750 Message: 00:01:37.750 ================= 00:01:37.750 Content Skipped 00:01:37.750 ================= 00:01:37.750 00:01:37.750 apps: 00:01:37.750 dumpcap: explicitly disabled via build config 00:01:37.750 graph: explicitly disabled via build config 00:01:37.750 pdump: explicitly disabled via build config 00:01:37.750 proc-info: explicitly disabled via build config 00:01:37.750 test-acl: explicitly disabled via build config 00:01:37.750 test-bbdev: explicitly disabled via build config 00:01:37.750 test-cmdline: explicitly disabled via build config 00:01:37.750 test-compress-perf: explicitly disabled via build config 00:01:37.750 test-crypto-perf: explicitly disabled via build config 00:01:37.750 test-dma-perf: explicitly disabled via build config 00:01:37.750 test-eventdev: explicitly disabled via build config 00:01:37.750 test-fib: explicitly disabled via build config 00:01:37.750 test-flow-perf: explicitly disabled via build config 00:01:37.750 test-gpudev: explicitly disabled via build config 00:01:37.750 test-mldev: explicitly disabled via build config 00:01:37.750 test-pipeline: explicitly disabled via build config 00:01:37.750 test-pmd: explicitly disabled via build config 00:01:37.750 test-regex: explicitly disabled via 
build config 00:01:37.750 test-sad: explicitly disabled via build config 00:01:37.750 test-security-perf: explicitly disabled via build config 00:01:37.750 00:01:37.750 libs: 00:01:37.750 metrics: explicitly disabled via build config 00:01:37.750 acl: explicitly disabled via build config 00:01:37.750 bbdev: explicitly disabled via build config 00:01:37.750 bitratestats: explicitly disabled via build config 00:01:37.750 bpf: explicitly disabled via build config 00:01:37.750 cfgfile: explicitly disabled via build config 00:01:37.750 distributor: explicitly disabled via build config 00:01:37.750 efd: explicitly disabled via build config 00:01:37.750 eventdev: explicitly disabled via build config 00:01:37.750 dispatcher: explicitly disabled via build config 00:01:37.750 gpudev: explicitly disabled via build config 00:01:37.750 gro: explicitly disabled via build config 00:01:37.750 gso: explicitly disabled via build config 00:01:37.750 ip_frag: explicitly disabled via build config 00:01:37.750 jobstats: explicitly disabled via build config 00:01:37.750 latencystats: explicitly disabled via build config 00:01:37.750 lpm: explicitly disabled via build config 00:01:37.750 member: explicitly disabled via build config 00:01:37.750 pcapng: explicitly disabled via build config 00:01:37.750 rawdev: explicitly disabled via build config 00:01:37.750 regexdev: explicitly disabled via build config 00:01:37.750 mldev: explicitly disabled via build config 00:01:37.750 rib: explicitly disabled via build config 00:01:37.750 sched: explicitly disabled via build config 00:01:37.750 stack: explicitly disabled via build config 00:01:37.750 ipsec: explicitly disabled via build config 00:01:37.750 pdcp: explicitly disabled via build config 00:01:37.750 fib: explicitly disabled via build config 00:01:37.750 port: explicitly disabled via build config 00:01:37.750 pdump: explicitly disabled via build config 00:01:37.750 table: explicitly disabled via build config 00:01:37.750 pipeline: explicitly disabled via build config 00:01:37.750 graph: explicitly disabled via build config 00:01:37.750 node: explicitly disabled via build config 00:01:37.750 00:01:37.750 drivers: 00:01:37.750 common/cpt: not in enabled drivers build config 00:01:37.750 common/dpaax: not in enabled drivers build config 00:01:37.750 common/iavf: not in enabled drivers build config 00:01:37.750 common/idpf: not in enabled drivers build config 00:01:37.750 common/mvep: not in enabled drivers build config 00:01:37.750 common/octeontx: not in enabled drivers build config 00:01:37.750 bus/auxiliary: not in enabled drivers build config 00:01:37.750 bus/cdx: not in enabled drivers build config 00:01:37.750 bus/dpaa: not in enabled drivers build config 00:01:37.750 bus/fslmc: not in enabled drivers build config 00:01:37.750 bus/ifpga: not in enabled drivers build config 00:01:37.750 bus/platform: not in enabled drivers build config 00:01:37.750 bus/vmbus: not in enabled drivers build config 00:01:37.750 common/cnxk: not in enabled drivers build config 00:01:37.750 common/mlx5: not in enabled drivers build config 00:01:37.750 common/nfp: not in enabled drivers build config 00:01:37.750 common/qat: not in enabled drivers build config 00:01:37.750 common/sfc_efx: not in enabled drivers build config 00:01:37.750 mempool/bucket: not in enabled drivers build config 00:01:37.750 mempool/cnxk: not in enabled drivers build config 00:01:37.750 mempool/dpaa: not in enabled drivers build config 00:01:37.750 mempool/dpaa2: not in enabled drivers build config 00:01:37.750 
mempool/octeontx: not in enabled drivers build config 00:01:37.750 mempool/stack: not in enabled drivers build config 00:01:37.750 dma/cnxk: not in enabled drivers build config 00:01:37.750 dma/dpaa: not in enabled drivers build config 00:01:37.750 dma/dpaa2: not in enabled drivers build config 00:01:37.750 dma/hisilicon: not in enabled drivers build config 00:01:37.750 dma/idxd: not in enabled drivers build config 00:01:37.750 dma/ioat: not in enabled drivers build config 00:01:37.750 dma/skeleton: not in enabled drivers build config 00:01:37.750 net/af_packet: not in enabled drivers build config 00:01:37.750 net/af_xdp: not in enabled drivers build config 00:01:37.750 net/ark: not in enabled drivers build config 00:01:37.750 net/atlantic: not in enabled drivers build config 00:01:37.750 net/avp: not in enabled drivers build config 00:01:37.750 net/axgbe: not in enabled drivers build config 00:01:37.750 net/bnx2x: not in enabled drivers build config 00:01:37.750 net/bnxt: not in enabled drivers build config 00:01:37.750 net/bonding: not in enabled drivers build config 00:01:37.750 net/cnxk: not in enabled drivers build config 00:01:37.750 net/cpfl: not in enabled drivers build config 00:01:37.750 net/cxgbe: not in enabled drivers build config 00:01:37.750 net/dpaa: not in enabled drivers build config 00:01:37.750 net/dpaa2: not in enabled drivers build config 00:01:37.750 net/e1000: not in enabled drivers build config 00:01:37.750 net/ena: not in enabled drivers build config 00:01:37.750 net/enetc: not in enabled drivers build config 00:01:37.750 net/enetfec: not in enabled drivers build config 00:01:37.750 net/enic: not in enabled drivers build config 00:01:37.750 net/failsafe: not in enabled drivers build config 00:01:37.750 net/fm10k: not in enabled drivers build config 00:01:37.750 net/gve: not in enabled drivers build config 00:01:37.750 net/hinic: not in enabled drivers build config 00:01:37.750 net/hns3: not in enabled drivers build config 00:01:37.750 net/i40e: not in enabled drivers build config 00:01:37.750 net/iavf: not in enabled drivers build config 00:01:37.750 net/ice: not in enabled drivers build config 00:01:37.750 net/idpf: not in enabled drivers build config 00:01:37.750 net/igc: not in enabled drivers build config 00:01:37.750 net/ionic: not in enabled drivers build config 00:01:37.750 net/ipn3ke: not in enabled drivers build config 00:01:37.750 net/ixgbe: not in enabled drivers build config 00:01:37.750 net/mana: not in enabled drivers build config 00:01:37.750 net/memif: not in enabled drivers build config 00:01:37.750 net/mlx4: not in enabled drivers build config 00:01:37.750 net/mlx5: not in enabled drivers build config 00:01:37.750 net/mvneta: not in enabled drivers build config 00:01:37.750 net/mvpp2: not in enabled drivers build config 00:01:37.750 net/netvsc: not in enabled drivers build config 00:01:37.750 net/nfb: not in enabled drivers build config 00:01:37.750 net/nfp: not in enabled drivers build config 00:01:37.750 net/ngbe: not in enabled drivers build config 00:01:37.750 net/null: not in enabled drivers build config 00:01:37.750 net/octeontx: not in enabled drivers build config 00:01:37.750 net/octeon_ep: not in enabled drivers build config 00:01:37.750 net/pcap: not in enabled drivers build config 00:01:37.750 net/pfe: not in enabled drivers build config 00:01:37.750 net/qede: not in enabled drivers build config 00:01:37.750 net/ring: not in enabled drivers build config 00:01:37.750 net/sfc: not in enabled drivers build config 00:01:37.750 net/softnic: 
not in enabled drivers build config 00:01:37.750 net/tap: not in enabled drivers build config 00:01:37.750 net/thunderx: not in enabled drivers build config 00:01:37.750 net/txgbe: not in enabled drivers build config 00:01:37.750 net/vdev_netvsc: not in enabled drivers build config 00:01:37.750 net/vhost: not in enabled drivers build config 00:01:37.750 net/virtio: not in enabled drivers build config 00:01:37.750 net/vmxnet3: not in enabled drivers build config 00:01:37.750 raw/*: missing internal dependency, "rawdev" 00:01:37.750 crypto/armv8: not in enabled drivers build config 00:01:37.750 crypto/bcmfs: not in enabled drivers build config 00:01:37.750 crypto/caam_jr: not in enabled drivers build config 00:01:37.750 crypto/ccp: not in enabled drivers build config 00:01:37.750 crypto/cnxk: not in enabled drivers build config 00:01:37.750 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.750 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.750 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.750 crypto/mlx5: not in enabled drivers build config 00:01:37.750 crypto/mvsam: not in enabled drivers build config 00:01:37.751 crypto/nitrox: not in enabled drivers build config 00:01:37.751 crypto/null: not in enabled drivers build config 00:01:37.751 crypto/octeontx: not in enabled drivers build config 00:01:37.751 crypto/openssl: not in enabled drivers build config 00:01:37.751 crypto/scheduler: not in enabled drivers build config 00:01:37.751 crypto/uadk: not in enabled drivers build config 00:01:37.751 crypto/virtio: not in enabled drivers build config 00:01:37.751 compress/isal: not in enabled drivers build config 00:01:37.751 compress/mlx5: not in enabled drivers build config 00:01:37.751 compress/octeontx: not in enabled drivers build config 00:01:37.751 compress/zlib: not in enabled drivers build config 00:01:37.751 regex/*: missing internal dependency, "regexdev" 00:01:37.751 ml/*: missing internal dependency, "mldev" 00:01:37.751 vdpa/ifc: not in enabled drivers build config 00:01:37.751 vdpa/mlx5: not in enabled drivers build config 00:01:37.751 vdpa/nfp: not in enabled drivers build config 00:01:37.751 vdpa/sfc: not in enabled drivers build config 00:01:37.751 event/*: missing internal dependency, "eventdev" 00:01:37.751 baseband/*: missing internal dependency, "bbdev" 00:01:37.751 gpu/*: missing internal dependency, "gpudev" 00:01:37.751 00:01:37.751 00:01:38.010 Build targets in project: 85 00:01:38.010 00:01:38.010 DPDK 23.11.0 00:01:38.010 00:01:38.010 User defined options 00:01:38.010 buildtype : debug 00:01:38.010 default_library : static 00:01:38.010 libdir : lib 00:01:38.010 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:38.010 c_args : -fPIC -Werror 00:01:38.010 c_link_args : 00:01:38.010 cpu_instruction_set: native 00:01:38.010 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:38.010 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:38.010 enable_docs : false 00:01:38.010 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:38.010 enable_kmods : false 00:01:38.010 tests : false 00:01:38.010 
00:01:38.010 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:38.279 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:38.279 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:38.279 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.279 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:38.279 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:38.279 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:38.279 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:38.279 [7/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:38.279 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.279 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:38.279 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.279 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:38.279 [12/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:38.279 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:38.279 [14/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.279 [15/265] Linking static target lib/librte_kvargs.a 00:01:38.279 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.279 [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.537 [18/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.537 [19/265] Linking static target lib/librte_log.a 00:01:38.537 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.537 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.537 [22/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.537 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.537 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.538 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.538 [26/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.538 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.538 [28/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.538 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.538 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.538 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.538 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.538 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.538 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.538 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.538 [36/265] Linking static target lib/librte_pci.a 00:01:38.538 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.538 [38/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.538 [39/265] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.538 [40/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.538 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.795 [42/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.795 [43/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.795 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.795 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.795 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.795 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.795 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.795 [49/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.795 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.795 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.795 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.795 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.795 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.795 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.795 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.795 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.795 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.795 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.795 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.795 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.795 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.795 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.795 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.795 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.795 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.795 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.795 [68/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.795 [69/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.795 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.795 [71/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.795 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.795 [73/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.795 [74/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.795 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.795 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.795 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.795 [78/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.795 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.795 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.795 [81/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.795 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.795 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.795 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.795 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.053 [86/265] Linking static target lib/librte_telemetry.a 00:01:39.053 [87/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:39.053 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.053 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.053 [90/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.053 [91/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:39.053 [92/265] Linking static target lib/librte_meter.a 00:01:39.053 [93/265] Linking static target lib/librte_ring.a 00:01:39.053 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:39.053 [95/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.053 [96/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.053 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.053 [98/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.053 [99/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.053 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.053 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:39.053 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.053 [103/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.053 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.053 [105/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.053 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.053 [107/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:39.053 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.054 [109/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:39.054 [110/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.054 [111/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.054 [112/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.054 [113/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.054 [114/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.054 [115/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:39.054 [116/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.054 [117/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.054 [118/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.054 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.054 [120/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.054 [121/265] Linking static target lib/librte_timer.a 00:01:39.054 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.054 [123/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.054 [124/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:39.054 [125/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.054 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.054 [127/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:39.054 [128/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.054 [129/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.054 [130/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.054 [131/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.054 [132/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.054 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.054 [134/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.054 [135/265] Linking static target lib/librte_net.a 00:01:39.054 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.054 [137/265] Linking static target lib/librte_mempool.a 00:01:39.054 [138/265] Linking static target lib/librte_dmadev.a 00:01:39.054 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.054 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.054 [141/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:39.054 [142/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.054 [143/265] Linking static target lib/librte_eal.a 00:01:39.054 [144/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.054 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.054 [146/265] Linking target lib/librte_log.so.24.0 00:01:39.054 [147/265] Linking static target lib/librte_cmdline.a 00:01:39.054 [148/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.054 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.054 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.054 [151/265] Linking static target lib/librte_power.a 00:01:39.054 [152/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.054 [153/265] Linking static target lib/librte_rcu.a 00:01:39.054 [154/265] Linking static target lib/librte_compressdev.a 00:01:39.054 [155/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.054 [156/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:39.054 [157/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.054 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.054 [159/265] Linking static target lib/librte_mbuf.a 00:01:39.054 [160/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.054 [161/265] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.054 [162/265] Linking static target lib/librte_hash.a 00:01:39.054 [163/265] Linking static target lib/librte_security.a 00:01:39.054 [164/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.054 [165/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.054 [166/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.313 [167/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:39.313 [168/265] Linking static target lib/librte_reorder.a 00:01:39.313 [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.313 [170/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.313 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.313 [172/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.313 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.313 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:39.313 [175/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.313 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.313 [177/265] Linking target lib/librte_kvargs.so.24.0 00:01:39.313 [178/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.313 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.314 [180/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.314 [181/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.314 [182/265] Linking static target lib/librte_cryptodev.a 00:01:39.314 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.314 [184/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.314 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.314 [186/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.314 [187/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:39.314 [188/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.314 [189/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.314 [190/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.314 [191/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.314 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.314 [193/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:39.573 [194/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.573 [195/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.573 [196/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.573 [197/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:39.573 [198/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:39.573 [199/265] Linking target lib/librte_telemetry.so.24.0 00:01:39.573 [200/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.573 [201/265] Generating 
lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.573 [202/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.573 [203/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.573 [204/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.573 [205/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.573 [206/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.573 [207/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.573 [208/265] Linking static target drivers/librte_bus_vdev.a 00:01:39.573 [209/265] Linking static target drivers/librte_bus_pci.a 00:01:39.573 [210/265] Linking static target drivers/librte_mempool_ring.a 00:01:39.573 [211/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.573 [212/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.573 [213/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.573 [214/265] Linking static target lib/librte_ethdev.a 00:01:39.833 [215/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:39.833 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.833 [217/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.093 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.093 [219/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.093 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.093 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.093 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.352 [223/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:40.352 [224/265] Linking static target lib/librte_vhost.a 00:01:40.352 [225/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.352 [226/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.732 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.723 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.325 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.862 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.862 [231/265] Linking target lib/librte_eal.so.24.0 00:01:51.862 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:51.862 [233/265] Linking target lib/librte_pci.so.24.0 00:01:51.862 [234/265] Linking target lib/librte_ring.so.24.0 00:01:51.862 [235/265] Linking target lib/librte_meter.so.24.0 00:01:51.862 [236/265] Linking target lib/librte_timer.so.24.0 00:01:51.862 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 
00:01:51.862 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:51.862 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:51.862 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:51.862 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:51.862 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:51.862 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:51.862 [244/265] Linking target lib/librte_mempool.so.24.0 00:01:51.862 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:51.862 [246/265] Linking target lib/librte_rcu.so.24.0 00:01:52.122 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:52.122 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:52.122 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:52.122 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:52.122 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:52.382 [252/265] Linking target lib/librte_reorder.so.24.0 00:01:52.382 [253/265] Linking target lib/librte_net.so.24.0 00:01:52.382 [254/265] Linking target lib/librte_compressdev.so.24.0 00:01:52.382 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:52.382 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:52.382 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:52.382 [258/265] Linking target lib/librte_cmdline.so.24.0 00:01:52.382 [259/265] Linking target lib/librte_hash.so.24.0 00:01:52.641 [260/265] Linking target lib/librte_security.so.24.0 00:01:52.641 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:52.641 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:52.641 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:52.641 [264/265] Linking target lib/librte_power.so.24.0 00:01:52.641 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:52.641 INFO: autodetecting backend as ninja 00:01:52.641 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:53.581 CC lib/ut/ut.o 00:01:53.581 CC lib/log/log.o 00:01:53.581 CC lib/log/log_flags.o 00:01:53.581 CC lib/log/log_deprecated.o 00:01:53.581 CC lib/ut_mock/mock.o 00:01:53.841 LIB libspdk_ut.a 00:01:53.841 LIB libspdk_ut_mock.a 00:01:53.841 LIB libspdk_log.a 00:01:54.100 CC lib/dma/dma.o 00:01:54.100 CC lib/util/base64.o 00:01:54.100 CC lib/util/bit_array.o 00:01:54.100 CC lib/util/cpuset.o 00:01:54.100 CC lib/util/crc16.o 00:01:54.100 CC lib/util/crc32_ieee.o 00:01:54.100 CC lib/util/crc32.o 00:01:54.100 CC lib/util/crc32c.o 00:01:54.100 CC lib/util/dif.o 00:01:54.100 CC lib/util/crc64.o 00:01:54.100 CC lib/ioat/ioat.o 00:01:54.100 CXX lib/trace_parser/trace.o 00:01:54.100 CC lib/util/fd.o 00:01:54.100 CC lib/util/file.o 00:01:54.100 CC lib/util/hexlify.o 00:01:54.100 CC lib/util/iov.o 00:01:54.100 CC lib/util/math.o 00:01:54.100 CC lib/util/pipe.o 00:01:54.100 CC lib/util/strerror_tls.o 00:01:54.100 CC lib/util/string.o 00:01:54.100 CC lib/util/uuid.o 00:01:54.100 CC lib/util/fd_group.o 00:01:54.100 CC lib/util/xor.o 00:01:54.100 CC lib/util/zipf.o 00:01:54.359 CC 
lib/vfio_user/host/vfio_user_pci.o 00:01:54.359 CC lib/vfio_user/host/vfio_user.o 00:01:54.359 LIB libspdk_dma.a 00:01:54.359 LIB libspdk_ioat.a 00:01:54.359 LIB libspdk_vfio_user.a 00:01:54.359 LIB libspdk_util.a 00:01:54.619 LIB libspdk_trace_parser.a 00:01:54.619 CC lib/conf/conf.o 00:01:54.619 CC lib/vmd/vmd.o 00:01:54.619 CC lib/vmd/led.o 00:01:54.878 CC lib/json/json_util.o 00:01:54.878 CC lib/json/json_parse.o 00:01:54.878 CC lib/json/json_write.o 00:01:54.878 CC lib/rdma/common.o 00:01:54.878 CC lib/rdma/rdma_verbs.o 00:01:54.878 CC lib/idxd/idxd_user.o 00:01:54.878 CC lib/env_dpdk/env.o 00:01:54.878 CC lib/idxd/idxd.o 00:01:54.878 CC lib/env_dpdk/memory.o 00:01:54.878 CC lib/env_dpdk/threads.o 00:01:54.878 CC lib/env_dpdk/pci.o 00:01:54.878 CC lib/env_dpdk/init.o 00:01:54.878 CC lib/env_dpdk/pci_ioat.o 00:01:54.878 CC lib/env_dpdk/pci_virtio.o 00:01:54.878 CC lib/env_dpdk/pci_vmd.o 00:01:54.878 CC lib/env_dpdk/pci_idxd.o 00:01:54.878 CC lib/env_dpdk/pci_event.o 00:01:54.878 CC lib/env_dpdk/sigbus_handler.o 00:01:54.878 CC lib/env_dpdk/pci_dpdk.o 00:01:54.878 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:54.878 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:54.878 LIB libspdk_conf.a 00:01:54.878 LIB libspdk_json.a 00:01:54.878 LIB libspdk_rdma.a 00:01:55.138 LIB libspdk_idxd.a 00:01:55.138 LIB libspdk_vmd.a 00:01:55.138 CC lib/jsonrpc/jsonrpc_server.o 00:01:55.138 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:55.138 CC lib/jsonrpc/jsonrpc_client.o 00:01:55.138 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:55.397 LIB libspdk_jsonrpc.a 00:01:55.656 LIB libspdk_env_dpdk.a 00:01:55.656 CC lib/rpc/rpc.o 00:01:55.915 LIB libspdk_rpc.a 00:01:56.174 CC lib/notify/notify.o 00:01:56.174 CC lib/notify/notify_rpc.o 00:01:56.174 CC lib/trace/trace_flags.o 00:01:56.174 CC lib/trace/trace.o 00:01:56.174 CC lib/trace/trace_rpc.o 00:01:56.174 CC lib/keyring/keyring.o 00:01:56.174 CC lib/keyring/keyring_rpc.o 00:01:56.433 LIB libspdk_notify.a 00:01:56.433 LIB libspdk_trace.a 00:01:56.433 LIB libspdk_keyring.a 00:01:56.693 CC lib/sock/sock.o 00:01:56.693 CC lib/thread/thread.o 00:01:56.693 CC lib/sock/sock_rpc.o 00:01:56.693 CC lib/thread/iobuf.o 00:01:56.953 LIB libspdk_sock.a 00:01:57.212 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:57.212 CC lib/nvme/nvme_ctrlr.o 00:01:57.212 CC lib/nvme/nvme_fabric.o 00:01:57.212 CC lib/nvme/nvme_pcie_common.o 00:01:57.212 CC lib/nvme/nvme_ns_cmd.o 00:01:57.212 CC lib/nvme/nvme_ns.o 00:01:57.212 CC lib/nvme/nvme_pcie.o 00:01:57.212 CC lib/nvme/nvme_qpair.o 00:01:57.212 CC lib/nvme/nvme_quirks.o 00:01:57.212 CC lib/nvme/nvme.o 00:01:57.212 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:57.212 CC lib/nvme/nvme_transport.o 00:01:57.212 CC lib/nvme/nvme_discovery.o 00:01:57.212 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:57.212 CC lib/nvme/nvme_tcp.o 00:01:57.212 CC lib/nvme/nvme_opal.o 00:01:57.212 CC lib/nvme/nvme_io_msg.o 00:01:57.212 CC lib/nvme/nvme_poll_group.o 00:01:57.212 CC lib/nvme/nvme_auth.o 00:01:57.212 CC lib/nvme/nvme_zns.o 00:01:57.212 CC lib/nvme/nvme_stubs.o 00:01:57.212 CC lib/nvme/nvme_cuse.o 00:01:57.212 CC lib/nvme/nvme_vfio_user.o 00:01:57.212 CC lib/nvme/nvme_rdma.o 00:01:57.470 LIB libspdk_thread.a 00:01:57.729 CC lib/vfu_tgt/tgt_endpoint.o 00:01:57.729 CC lib/vfu_tgt/tgt_rpc.o 00:01:57.729 CC lib/blob/blobstore.o 00:01:57.729 CC lib/blob/zeroes.o 00:01:57.729 CC lib/blob/request.o 00:01:57.729 CC lib/blob/blob_bs_dev.o 00:01:57.729 CC lib/init/json_config.o 00:01:57.729 CC lib/virtio/virtio_vhost_user.o 00:01:57.729 CC lib/virtio/virtio.o 00:01:57.729 CC lib/accel/accel.o 00:01:57.729 CC 
lib/init/subsystem.o 00:01:57.729 CC lib/accel/accel_rpc.o 00:01:57.729 CC lib/virtio/virtio_vfio_user.o 00:01:57.729 CC lib/accel/accel_sw.o 00:01:57.729 CC lib/init/subsystem_rpc.o 00:01:57.729 CC lib/virtio/virtio_pci.o 00:01:57.729 CC lib/init/rpc.o 00:01:57.988 LIB libspdk_init.a 00:01:57.988 LIB libspdk_vfu_tgt.a 00:01:57.988 LIB libspdk_virtio.a 00:01:58.247 CC lib/event/reactor.o 00:01:58.247 CC lib/event/app.o 00:01:58.247 CC lib/event/log_rpc.o 00:01:58.247 CC lib/event/app_rpc.o 00:01:58.247 CC lib/event/scheduler_static.o 00:01:58.506 LIB libspdk_accel.a 00:01:58.506 LIB libspdk_event.a 00:01:58.506 LIB libspdk_nvme.a 00:01:58.766 CC lib/bdev/bdev_rpc.o 00:01:58.766 CC lib/bdev/bdev.o 00:01:58.766 CC lib/bdev/part.o 00:01:58.766 CC lib/bdev/bdev_zone.o 00:01:58.766 CC lib/bdev/scsi_nvme.o 00:01:59.334 LIB libspdk_blob.a 00:01:59.902 CC lib/blobfs/blobfs.o 00:01:59.902 CC lib/lvol/lvol.o 00:01:59.902 CC lib/blobfs/tree.o 00:02:00.162 LIB libspdk_lvol.a 00:02:00.162 LIB libspdk_blobfs.a 00:02:00.422 LIB libspdk_bdev.a 00:02:00.681 CC lib/scsi/dev.o 00:02:00.681 CC lib/scsi/lun.o 00:02:00.681 CC lib/scsi/scsi.o 00:02:00.681 CC lib/scsi/port.o 00:02:00.681 CC lib/nvmf/ctrlr.o 00:02:00.681 CC lib/nvmf/ctrlr_discovery.o 00:02:00.681 CC lib/scsi/scsi_bdev.o 00:02:00.681 CC lib/nvmf/ctrlr_bdev.o 00:02:00.681 CC lib/nvmf/subsystem.o 00:02:00.681 CC lib/nvmf/nvmf.o 00:02:00.681 CC lib/scsi/scsi_pr.o 00:02:00.681 CC lib/nvmf/nvmf_rpc.o 00:02:00.681 CC lib/scsi/scsi_rpc.o 00:02:00.681 CC lib/scsi/task.o 00:02:00.681 CC lib/nvmf/transport.o 00:02:00.681 CC lib/ftl/ftl_init.o 00:02:00.681 CC lib/nvmf/tcp.o 00:02:00.681 CC lib/ftl/ftl_core.o 00:02:00.681 CC lib/ftl/ftl_debug.o 00:02:00.681 CC lib/nvmf/stubs.o 00:02:00.681 CC lib/nvmf/mdns_server.o 00:02:00.681 CC lib/ftl/ftl_layout.o 00:02:00.681 CC lib/nvmf/rdma.o 00:02:00.681 CC lib/nvmf/vfio_user.o 00:02:00.681 CC lib/nvmf/auth.o 00:02:00.681 CC lib/ftl/ftl_io.o 00:02:00.681 CC lib/ftl/ftl_sb.o 00:02:00.681 CC lib/ftl/ftl_l2p_flat.o 00:02:00.681 CC lib/ftl/ftl_l2p.o 00:02:00.681 CC lib/ftl/ftl_nv_cache.o 00:02:00.681 CC lib/ftl/ftl_band.o 00:02:00.681 CC lib/nbd/nbd.o 00:02:00.681 CC lib/ftl/ftl_band_ops.o 00:02:00.681 CC lib/nbd/nbd_rpc.o 00:02:00.681 CC lib/ftl/ftl_writer.o 00:02:00.681 CC lib/ftl/ftl_rq.o 00:02:00.681 CC lib/ftl/ftl_reloc.o 00:02:00.681 CC lib/ftl/ftl_l2p_cache.o 00:02:00.681 CC lib/ftl/ftl_p2l.o 00:02:00.681 CC lib/ublk/ublk.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:00.681 CC lib/ublk/ublk_rpc.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:00.681 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:00.681 CC lib/ftl/utils/ftl_conf.o 00:02:00.681 CC lib/ftl/utils/ftl_md.o 00:02:00.681 CC lib/ftl/utils/ftl_mempool.o 00:02:00.681 CC lib/ftl/utils/ftl_property.o 00:02:00.681 CC lib/ftl/utils/ftl_bitmap.o 00:02:00.681 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:00.681 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:00.681 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:00.681 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:00.681 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:02:00.681 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:00.681 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:00.681 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:00.681 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:00.681 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:00.681 CC lib/ftl/base/ftl_base_dev.o 00:02:00.681 CC lib/ftl/base/ftl_base_bdev.o 00:02:00.681 CC lib/ftl/ftl_trace.o 00:02:00.940 LIB libspdk_nbd.a 00:02:01.198 LIB libspdk_scsi.a 00:02:01.198 LIB libspdk_ublk.a 00:02:01.456 LIB libspdk_ftl.a 00:02:01.456 CC lib/vhost/vhost.o 00:02:01.456 CC lib/vhost/vhost_rpc.o 00:02:01.456 CC lib/vhost/vhost_scsi.o 00:02:01.456 CC lib/vhost/vhost_blk.o 00:02:01.456 CC lib/vhost/rte_vhost_user.o 00:02:01.456 CC lib/iscsi/conn.o 00:02:01.456 CC lib/iscsi/init_grp.o 00:02:01.456 CC lib/iscsi/iscsi.o 00:02:01.456 CC lib/iscsi/param.o 00:02:01.456 CC lib/iscsi/md5.o 00:02:01.456 CC lib/iscsi/portal_grp.o 00:02:01.456 CC lib/iscsi/iscsi_rpc.o 00:02:01.456 CC lib/iscsi/tgt_node.o 00:02:01.456 CC lib/iscsi/iscsi_subsystem.o 00:02:01.456 CC lib/iscsi/task.o 00:02:02.022 LIB libspdk_nvmf.a 00:02:02.022 LIB libspdk_vhost.a 00:02:02.281 LIB libspdk_iscsi.a 00:02:02.849 CC module/env_dpdk/env_dpdk_rpc.o 00:02:02.849 CC module/vfu_device/vfu_virtio.o 00:02:02.849 CC module/vfu_device/vfu_virtio_blk.o 00:02:02.849 CC module/vfu_device/vfu_virtio_scsi.o 00:02:02.849 CC module/vfu_device/vfu_virtio_rpc.o 00:02:02.849 LIB libspdk_env_dpdk_rpc.a 00:02:02.849 CC module/sock/posix/posix.o 00:02:02.849 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:02.849 CC module/accel/ioat/accel_ioat.o 00:02:02.849 CC module/accel/dsa/accel_dsa.o 00:02:02.849 CC module/accel/dsa/accel_dsa_rpc.o 00:02:02.849 CC module/blob/bdev/blob_bdev.o 00:02:02.849 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:02.849 CC module/accel/ioat/accel_ioat_rpc.o 00:02:02.849 CC module/accel/iaa/accel_iaa.o 00:02:02.849 CC module/accel/iaa/accel_iaa_rpc.o 00:02:02.849 CC module/scheduler/gscheduler/gscheduler.o 00:02:02.850 CC module/keyring/file/keyring.o 00:02:02.850 CC module/keyring/file/keyring_rpc.o 00:02:02.850 CC module/accel/error/accel_error.o 00:02:02.850 CC module/accel/error/accel_error_rpc.o 00:02:02.850 LIB libspdk_keyring_file.a 00:02:02.850 LIB libspdk_scheduler_gscheduler.a 00:02:02.850 LIB libspdk_scheduler_dpdk_governor.a 00:02:03.108 LIB libspdk_accel_ioat.a 00:02:03.108 LIB libspdk_scheduler_dynamic.a 00:02:03.108 LIB libspdk_accel_iaa.a 00:02:03.108 LIB libspdk_accel_error.a 00:02:03.108 LIB libspdk_blob_bdev.a 00:02:03.108 LIB libspdk_accel_dsa.a 00:02:03.108 LIB libspdk_vfu_device.a 00:02:03.366 LIB libspdk_sock_posix.a 00:02:03.366 CC module/bdev/null/bdev_null.o 00:02:03.366 CC module/bdev/null/bdev_null_rpc.o 00:02:03.366 CC module/bdev/ftl/bdev_ftl.o 00:02:03.366 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:03.366 CC module/bdev/nvme/bdev_nvme.o 00:02:03.366 CC module/bdev/gpt/gpt.o 00:02:03.366 CC module/bdev/nvme/nvme_rpc.o 00:02:03.366 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:03.366 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:03.366 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:03.366 CC module/bdev/nvme/bdev_mdns_client.o 00:02:03.366 CC module/bdev/gpt/vbdev_gpt.o 00:02:03.366 CC module/bdev/raid/bdev_raid.o 00:02:03.366 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:03.366 CC module/bdev/nvme/vbdev_opal.o 00:02:03.366 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:03.366 CC module/bdev/raid/bdev_raid_sb.o 00:02:03.366 CC module/bdev/delay/vbdev_delay.o 00:02:03.366 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:03.366 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:03.366 CC module/bdev/lvol/vbdev_lvol.o 00:02:03.366 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:03.366 CC module/bdev/raid/concat.o 00:02:03.366 CC module/bdev/raid/raid0.o 00:02:03.366 CC module/bdev/raid/raid1.o 00:02:03.366 CC module/bdev/raid/bdev_raid_rpc.o 00:02:03.366 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:03.366 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:03.366 CC module/bdev/split/vbdev_split.o 00:02:03.366 CC module/bdev/split/vbdev_split_rpc.o 00:02:03.366 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:03.366 CC module/bdev/iscsi/bdev_iscsi.o 00:02:03.366 CC module/blobfs/bdev/blobfs_bdev.o 00:02:03.366 CC module/bdev/passthru/vbdev_passthru.o 00:02:03.366 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:03.367 CC module/bdev/aio/bdev_aio.o 00:02:03.367 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:03.367 CC module/bdev/aio/bdev_aio_rpc.o 00:02:03.367 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:03.367 CC module/bdev/error/vbdev_error.o 00:02:03.367 CC module/bdev/malloc/bdev_malloc.o 00:02:03.367 CC module/bdev/error/vbdev_error_rpc.o 00:02:03.624 LIB libspdk_blobfs_bdev.a 00:02:03.624 LIB libspdk_bdev_split.a 00:02:03.624 LIB libspdk_bdev_null.a 00:02:03.624 LIB libspdk_bdev_gpt.a 00:02:03.624 LIB libspdk_bdev_ftl.a 00:02:03.624 LIB libspdk_bdev_error.a 00:02:03.624 LIB libspdk_bdev_passthru.a 00:02:03.624 LIB libspdk_bdev_zone_block.a 00:02:03.624 LIB libspdk_bdev_aio.a 00:02:03.624 LIB libspdk_bdev_delay.a 00:02:03.624 LIB libspdk_bdev_iscsi.a 00:02:03.624 LIB libspdk_bdev_malloc.a 00:02:03.882 LIB libspdk_bdev_lvol.a 00:02:03.882 LIB libspdk_bdev_virtio.a 00:02:04.140 LIB libspdk_bdev_raid.a 00:02:04.706 LIB libspdk_bdev_nvme.a 00:02:05.272 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:05.272 CC module/event/subsystems/iobuf/iobuf.o 00:02:05.272 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:05.272 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:05.272 CC module/event/subsystems/vmd/vmd.o 00:02:05.272 CC module/event/subsystems/sock/sock.o 00:02:05.272 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:05.272 CC module/event/subsystems/keyring/keyring.o 00:02:05.272 CC module/event/subsystems/scheduler/scheduler.o 00:02:05.530 LIB libspdk_event_vfu_tgt.a 00:02:05.530 LIB libspdk_event_sock.a 00:02:05.530 LIB libspdk_event_vhost_blk.a 00:02:05.530 LIB libspdk_event_iobuf.a 00:02:05.530 LIB libspdk_event_keyring.a 00:02:05.530 LIB libspdk_event_vmd.a 00:02:05.530 LIB libspdk_event_scheduler.a 00:02:05.788 CC module/event/subsystems/accel/accel.o 00:02:05.788 LIB libspdk_event_accel.a 00:02:06.355 CC module/event/subsystems/bdev/bdev.o 00:02:06.355 LIB libspdk_event_bdev.a 00:02:06.613 CC module/event/subsystems/ublk/ublk.o 00:02:06.613 CC module/event/subsystems/nbd/nbd.o 00:02:06.613 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:06.613 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:06.613 CC module/event/subsystems/scsi/scsi.o 00:02:06.613 LIB libspdk_event_ublk.a 00:02:06.613 LIB libspdk_event_nbd.a 00:02:06.872 LIB libspdk_event_scsi.a 00:02:06.872 LIB libspdk_event_nvmf.a 00:02:07.131 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:07.131 CC module/event/subsystems/iscsi/iscsi.o 00:02:07.131 LIB libspdk_event_vhost_scsi.a 00:02:07.131 LIB libspdk_event_iscsi.a 00:02:07.708 CC app/trace_record/trace_record.o 00:02:07.708 CC test/rpc_client/rpc_client_test.o 00:02:07.708 CC app/spdk_lspci/spdk_lspci.o 00:02:07.708 CC app/spdk_nvme_perf/perf.o 00:02:07.708 CC 
app/spdk_nvme_discover/discovery_aer.o 00:02:07.708 CC app/spdk_nvme_identify/identify.o 00:02:07.708 CXX app/trace/trace.o 00:02:07.708 TEST_HEADER include/spdk/accel.h 00:02:07.708 TEST_HEADER include/spdk/accel_module.h 00:02:07.708 CC app/spdk_top/spdk_top.o 00:02:07.708 TEST_HEADER include/spdk/assert.h 00:02:07.708 TEST_HEADER include/spdk/barrier.h 00:02:07.708 TEST_HEADER include/spdk/base64.h 00:02:07.708 TEST_HEADER include/spdk/bdev.h 00:02:07.708 TEST_HEADER include/spdk/bdev_module.h 00:02:07.708 TEST_HEADER include/spdk/bdev_zone.h 00:02:07.708 TEST_HEADER include/spdk/bit_array.h 00:02:07.708 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:07.708 TEST_HEADER include/spdk/bit_pool.h 00:02:07.708 TEST_HEADER include/spdk/blob_bdev.h 00:02:07.708 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:07.708 TEST_HEADER include/spdk/blobfs.h 00:02:07.708 TEST_HEADER include/spdk/blob.h 00:02:07.708 TEST_HEADER include/spdk/conf.h 00:02:07.708 TEST_HEADER include/spdk/config.h 00:02:07.708 TEST_HEADER include/spdk/cpuset.h 00:02:07.708 TEST_HEADER include/spdk/crc16.h 00:02:07.708 TEST_HEADER include/spdk/crc32.h 00:02:07.708 TEST_HEADER include/spdk/dif.h 00:02:07.708 TEST_HEADER include/spdk/crc64.h 00:02:07.708 TEST_HEADER include/spdk/dma.h 00:02:07.708 TEST_HEADER include/spdk/endian.h 00:02:07.708 TEST_HEADER include/spdk/env_dpdk.h 00:02:07.708 TEST_HEADER include/spdk/env.h 00:02:07.708 TEST_HEADER include/spdk/fd_group.h 00:02:07.708 TEST_HEADER include/spdk/event.h 00:02:07.708 TEST_HEADER include/spdk/fd.h 00:02:07.708 TEST_HEADER include/spdk/file.h 00:02:07.708 TEST_HEADER include/spdk/ftl.h 00:02:07.708 TEST_HEADER include/spdk/hexlify.h 00:02:07.708 TEST_HEADER include/spdk/gpt_spec.h 00:02:07.708 TEST_HEADER include/spdk/idxd.h 00:02:07.708 TEST_HEADER include/spdk/histogram_data.h 00:02:07.708 TEST_HEADER include/spdk/init.h 00:02:07.708 TEST_HEADER include/spdk/idxd_spec.h 00:02:07.708 TEST_HEADER include/spdk/ioat.h 00:02:07.708 TEST_HEADER include/spdk/ioat_spec.h 00:02:07.708 TEST_HEADER include/spdk/iscsi_spec.h 00:02:07.708 TEST_HEADER include/spdk/json.h 00:02:07.708 CC app/spdk_dd/spdk_dd.o 00:02:07.708 TEST_HEADER include/spdk/keyring.h 00:02:07.708 TEST_HEADER include/spdk/jsonrpc.h 00:02:07.708 TEST_HEADER include/spdk/keyring_module.h 00:02:07.708 TEST_HEADER include/spdk/likely.h 00:02:07.708 CC app/vhost/vhost.o 00:02:07.708 TEST_HEADER include/spdk/log.h 00:02:07.708 TEST_HEADER include/spdk/lvol.h 00:02:07.708 TEST_HEADER include/spdk/memory.h 00:02:07.708 TEST_HEADER include/spdk/mmio.h 00:02:07.708 TEST_HEADER include/spdk/nbd.h 00:02:07.708 TEST_HEADER include/spdk/notify.h 00:02:07.708 TEST_HEADER include/spdk/nvme.h 00:02:07.708 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.708 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:07.708 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.708 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.708 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.708 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.708 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.708 TEST_HEADER include/spdk/nvmf.h 00:02:07.708 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.708 TEST_HEADER include/spdk/opal.h 00:02:07.708 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.708 CC app/nvmf_tgt/nvmf_main.o 00:02:07.708 TEST_HEADER include/spdk/opal_spec.h 00:02:07.708 TEST_HEADER include/spdk/pipe.h 00:02:07.708 TEST_HEADER include/spdk/pci_ids.h 00:02:07.708 TEST_HEADER include/spdk/queue.h 00:02:07.708 TEST_HEADER include/spdk/reduce.h 00:02:07.708 CC 
app/iscsi_tgt/iscsi_tgt.o 00:02:07.708 TEST_HEADER include/spdk/rpc.h 00:02:07.708 TEST_HEADER include/spdk/scheduler.h 00:02:07.708 TEST_HEADER include/spdk/scsi.h 00:02:07.708 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.708 TEST_HEADER include/spdk/sock.h 00:02:07.708 TEST_HEADER include/spdk/stdinc.h 00:02:07.708 TEST_HEADER include/spdk/string.h 00:02:07.708 TEST_HEADER include/spdk/trace.h 00:02:07.708 TEST_HEADER include/spdk/thread.h 00:02:07.708 TEST_HEADER include/spdk/tree.h 00:02:07.708 TEST_HEADER include/spdk/trace_parser.h 00:02:07.708 TEST_HEADER include/spdk/util.h 00:02:07.708 TEST_HEADER include/spdk/ublk.h 00:02:07.708 TEST_HEADER include/spdk/uuid.h 00:02:07.708 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.708 TEST_HEADER include/spdk/version.h 00:02:07.708 CC examples/nvme/hello_world/hello_world.o 00:02:07.708 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.708 CC app/spdk_tgt/spdk_tgt.o 00:02:07.708 TEST_HEADER include/spdk/vmd.h 00:02:07.708 TEST_HEADER include/spdk/vhost.h 00:02:07.708 TEST_HEADER include/spdk/zipf.h 00:02:07.708 TEST_HEADER include/spdk/xor.h 00:02:07.708 CXX test/cpp_headers/accel.o 00:02:07.708 CXX test/cpp_headers/accel_module.o 00:02:07.708 CC examples/ioat/perf/perf.o 00:02:07.708 CXX test/cpp_headers/assert.o 00:02:07.708 CXX test/cpp_headers/base64.o 00:02:07.708 CC examples/vmd/led/led.o 00:02:07.708 CXX test/cpp_headers/barrier.o 00:02:07.708 CC examples/bdev/bdevperf/bdevperf.o 00:02:07.708 CC examples/nvme/reconnect/reconnect.o 00:02:07.708 CC examples/nvme/hotplug/hotplug.o 00:02:07.709 CXX test/cpp_headers/bdev.o 00:02:07.709 CC examples/nvme/abort/abort.o 00:02:07.709 CXX test/cpp_headers/bdev_zone.o 00:02:07.709 CXX test/cpp_headers/bdev_module.o 00:02:07.709 CXX test/cpp_headers/bit_array.o 00:02:07.709 CC examples/sock/hello_world/hello_sock.o 00:02:07.709 CXX test/cpp_headers/bit_pool.o 00:02:07.709 CXX test/cpp_headers/blob_bdev.o 00:02:07.709 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.709 CC examples/idxd/perf/perf.o 00:02:07.709 CXX test/cpp_headers/blobfs.o 00:02:07.709 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:07.709 CXX test/cpp_headers/blob.o 00:02:07.709 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:07.709 CC examples/vmd/lsvmd/lsvmd.o 00:02:07.709 CXX test/cpp_headers/conf.o 00:02:07.709 CXX test/cpp_headers/cpuset.o 00:02:07.709 CC examples/nvme/arbitration/arbitration.o 00:02:07.709 CXX test/cpp_headers/config.o 00:02:07.709 CC examples/accel/perf/accel_perf.o 00:02:07.709 CXX test/cpp_headers/crc16.o 00:02:07.709 CXX test/cpp_headers/crc32.o 00:02:07.709 CC examples/util/zipf/zipf.o 00:02:07.709 CXX test/cpp_headers/crc64.o 00:02:07.709 CXX test/cpp_headers/dif.o 00:02:07.709 CXX test/cpp_headers/dma.o 00:02:07.709 CC examples/ioat/verify/verify.o 00:02:07.709 CXX test/cpp_headers/endian.o 00:02:07.709 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:07.709 CXX test/cpp_headers/env_dpdk.o 00:02:07.709 CXX test/cpp_headers/env.o 00:02:07.709 CXX test/cpp_headers/event.o 00:02:07.709 CXX test/cpp_headers/fd_group.o 00:02:07.709 CC test/event/reactor_perf/reactor_perf.o 00:02:07.709 CXX test/cpp_headers/fd.o 00:02:07.709 CC test/event/reactor/reactor.o 00:02:07.709 CC examples/bdev/hello_world/hello_bdev.o 00:02:07.709 CXX test/cpp_headers/file.o 00:02:07.709 CXX test/cpp_headers/ftl.o 00:02:07.709 CXX test/cpp_headers/gpt_spec.o 00:02:07.709 CC test/app/histogram_perf/histogram_perf.o 00:02:07.709 CXX test/cpp_headers/hexlify.o 00:02:07.709 CXX test/cpp_headers/histogram_data.o 00:02:07.709 CXX 
test/cpp_headers/idxd.o 00:02:07.709 CXX test/cpp_headers/idxd_spec.o 00:02:07.709 CC test/event/event_perf/event_perf.o 00:02:07.709 CC examples/thread/thread/thread_ex.o 00:02:07.709 CXX test/cpp_headers/init.o 00:02:07.709 CC test/env/pci/pci_ut.o 00:02:07.709 CC test/env/memory/memory_ut.o 00:02:07.709 CC test/app/jsoncat/jsoncat.o 00:02:07.709 CC test/thread/lock/spdk_lock.o 00:02:07.709 CC test/thread/poller_perf/poller_perf.o 00:02:07.709 CC test/env/vtophys/vtophys.o 00:02:07.709 CC test/nvme/reset/reset.o 00:02:07.709 CC test/nvme/e2edp/nvme_dp.o 00:02:07.709 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.709 CC test/nvme/aer/aer.o 00:02:07.709 CC examples/blob/hello_world/hello_blob.o 00:02:07.709 CC test/nvme/sgl/sgl.o 00:02:07.709 CC test/nvme/startup/startup.o 00:02:07.709 CC app/fio/nvme/fio_plugin.o 00:02:07.709 CC test/app/stub/stub.o 00:02:07.709 CC test/nvme/reserve/reserve.o 00:02:07.709 CC test/nvme/compliance/nvme_compliance.o 00:02:07.709 CC test/nvme/fused_ordering/fused_ordering.o 00:02:07.709 CC test/nvme/simple_copy/simple_copy.o 00:02:07.709 CC test/nvme/fdp/fdp.o 00:02:07.709 CC test/nvme/cuse/cuse.o 00:02:07.709 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:07.709 CC test/event/app_repeat/app_repeat.o 00:02:07.709 CC test/nvme/err_injection/err_injection.o 00:02:07.709 CC test/nvme/connect_stress/connect_stress.o 00:02:07.709 CC test/nvme/overhead/overhead.o 00:02:07.709 CC examples/blob/cli/blobcli.o 00:02:07.709 CC test/nvme/boot_partition/boot_partition.o 00:02:07.709 CC examples/nvmf/nvmf/nvmf.o 00:02:07.709 LINK spdk_lspci 00:02:07.709 CXX test/cpp_headers/ioat.o 00:02:07.709 CC test/event/scheduler/scheduler.o 00:02:07.709 CC test/dma/test_dma/test_dma.o 00:02:07.709 CC test/blobfs/mkfs/mkfs.o 00:02:07.709 CC app/fio/bdev/fio_plugin.o 00:02:07.709 CC test/app/bdev_svc/bdev_svc.o 00:02:07.709 CC test/bdev/bdevio/bdevio.o 00:02:07.709 LINK rpc_client_test 00:02:07.709 CC test/accel/dif/dif.o 00:02:07.709 LINK spdk_nvme_discover 00:02:07.709 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.709 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.709 LINK interrupt_tgt 00:02:07.977 CC test/lvol/esnap/esnap.o 00:02:07.977 LINK lsvmd 00:02:07.977 LINK spdk_trace_record 00:02:07.977 LINK led 00:02:07.977 LINK reactor 00:02:07.977 LINK reactor_perf 00:02:07.977 LINK nvmf_tgt 00:02:07.977 LINK zipf 00:02:07.977 LINK jsoncat 00:02:07.977 CXX test/cpp_headers/ioat_spec.o 00:02:07.977 CXX test/cpp_headers/iscsi_spec.o 00:02:07.977 CXX test/cpp_headers/json.o 00:02:07.977 LINK event_perf 00:02:07.977 LINK histogram_perf 00:02:07.977 LINK vhost 00:02:07.977 LINK vtophys 00:02:07.977 CXX test/cpp_headers/jsonrpc.o 00:02:07.977 CXX test/cpp_headers/keyring.o 00:02:07.977 LINK poller_perf 00:02:07.977 CXX test/cpp_headers/keyring_module.o 00:02:07.977 CXX test/cpp_headers/likely.o 00:02:07.977 CXX test/cpp_headers/log.o 00:02:07.978 CXX test/cpp_headers/lvol.o 00:02:07.978 CXX test/cpp_headers/memory.o 00:02:07.978 CXX test/cpp_headers/mmio.o 00:02:07.978 CXX test/cpp_headers/nbd.o 00:02:07.978 CXX test/cpp_headers/notify.o 00:02:07.978 CXX test/cpp_headers/nvme.o 00:02:07.978 CXX test/cpp_headers/nvme_intel.o 00:02:07.978 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.978 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.978 LINK app_repeat 00:02:07.978 CXX test/cpp_headers/nvme_spec.o 00:02:07.978 CXX test/cpp_headers/nvme_zns.o 00:02:07.978 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.978 LINK iscsi_tgt 00:02:07.978 LINK env_dpdk_post_init 00:02:07.978 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:02:07.978 CXX test/cpp_headers/nvmf.o 00:02:07.978 CXX test/cpp_headers/nvmf_spec.o 00:02:07.978 CXX test/cpp_headers/nvmf_transport.o 00:02:07.978 CXX test/cpp_headers/opal.o 00:02:07.978 LINK pmr_persistence 00:02:07.978 CXX test/cpp_headers/opal_spec.o 00:02:07.978 LINK hello_world 00:02:07.978 LINK boot_partition 00:02:07.978 LINK spdk_tgt 00:02:07.978 LINK ioat_perf 00:02:07.978 CXX test/cpp_headers/pci_ids.o 00:02:07.978 LINK startup 00:02:07.978 CXX test/cpp_headers/pipe.o 00:02:07.978 CXX test/cpp_headers/queue.o 00:02:07.978 LINK doorbell_aers 00:02:07.978 CXX test/cpp_headers/reduce.o 00:02:07.978 LINK connect_stress 00:02:07.978 LINK cmb_copy 00:02:07.978 LINK verify 00:02:07.978 LINK err_injection 00:02:07.978 LINK stub 00:02:07.978 CXX test/cpp_headers/rpc.o 00:02:07.978 LINK hotplug 00:02:07.978 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:07.978 struct spdk_nvme_fdp_ruhs ruhs; 00:02:07.978 ^ 00:02:07.978 CXX test/cpp_headers/scheduler.o 00:02:07.978 LINK reserve 00:02:07.978 LINK fused_ordering 00:02:07.978 LINK hello_sock 00:02:07.978 CXX test/cpp_headers/scsi.o 00:02:07.978 LINK hello_blob 00:02:07.978 LINK thread 00:02:07.978 CXX test/cpp_headers/scsi_spec.o 00:02:07.978 LINK simple_copy 00:02:07.978 LINK bdev_svc 00:02:07.978 LINK hello_bdev 00:02:07.978 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:07.978 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.978 LINK mkfs 00:02:07.978 LINK nvme_dp 00:02:07.978 LINK scheduler 00:02:07.978 LINK sgl 00:02:07.978 LINK reset 00:02:07.978 LINK aer 00:02:07.978 LINK fdp 00:02:07.978 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:08.240 LINK spdk_trace 00:02:08.240 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:08.240 LINK idxd_perf 00:02:08.240 LINK overhead 00:02:08.240 LINK nvmf 00:02:08.240 LINK reconnect 00:02:08.240 CXX test/cpp_headers/sock.o 00:02:08.240 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.240 CXX test/cpp_headers/stdinc.o 00:02:08.240 LINK abort 00:02:08.240 CXX test/cpp_headers/string.o 00:02:08.240 CXX test/cpp_headers/thread.o 00:02:08.240 CXX test/cpp_headers/trace.o 00:02:08.240 CXX test/cpp_headers/trace_parser.o 00:02:08.240 CXX test/cpp_headers/tree.o 00:02:08.240 CXX test/cpp_headers/ublk.o 00:02:08.240 CXX test/cpp_headers/util.o 00:02:08.240 CXX test/cpp_headers/uuid.o 00:02:08.240 CXX test/cpp_headers/version.o 00:02:08.240 LINK arbitration 00:02:08.240 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.240 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.240 CXX test/cpp_headers/vhost.o 00:02:08.240 CXX test/cpp_headers/vmd.o 00:02:08.240 CXX test/cpp_headers/xor.o 00:02:08.240 CXX test/cpp_headers/zipf.o 00:02:08.240 LINK test_dma 00:02:08.240 LINK spdk_dd 00:02:08.240 LINK nvme_manage 00:02:08.240 LINK bdevio 00:02:08.240 LINK pci_ut 00:02:08.240 LINK dif 00:02:08.240 LINK accel_perf 00:02:08.240 LINK blobcli 00:02:08.500 LINK nvme_compliance 00:02:08.500 LINK mem_callbacks 00:02:08.500 LINK nvme_fuzz 00:02:08.500 1 warning generated. 
00:02:08.500 LINK llvm_vfio_fuzz 00:02:08.500 LINK spdk_bdev 00:02:08.500 LINK spdk_nvme_identify 00:02:08.500 LINK bdevperf 00:02:08.500 LINK spdk_nvme 00:02:08.758 LINK spdk_nvme_perf 00:02:08.758 LINK memory_ut 00:02:08.758 LINK vhost_fuzz 00:02:08.758 LINK spdk_top 00:02:09.020 LINK cuse 00:02:09.020 LINK llvm_nvme_fuzz 00:02:09.358 LINK spdk_lock 00:02:09.358 LINK iscsi_fuzz 00:02:11.896 LINK esnap 00:02:11.896 00:02:11.896 real 0m42.367s 00:02:11.896 user 6m8.474s 00:02:11.896 sys 2m44.925s 00:02:11.896 12:22:56 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:11.896 12:22:56 make -- common/autotest_common.sh@10 -- $ set +x 00:02:11.896 ************************************ 00:02:11.896 END TEST make 00:02:11.896 ************************************ 00:02:11.896 12:22:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:11.896 12:22:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:11.896 12:22:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:11.896 12:22:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.896 12:22:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:11.896 12:22:56 -- pm/common@44 -- $ pid=2270153 00:02:11.896 12:22:56 -- pm/common@50 -- $ kill -TERM 2270153 00:02:11.896 12:22:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.896 12:22:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:11.896 12:22:56 -- pm/common@44 -- $ pid=2270155 00:02:11.896 12:22:56 -- pm/common@50 -- $ kill -TERM 2270155 00:02:11.896 12:22:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.896 12:22:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:11.896 12:22:56 -- pm/common@44 -- $ pid=2270157 00:02:11.896 12:22:56 -- pm/common@50 -- $ kill -TERM 2270157 00:02:11.896 12:22:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.896 12:22:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:11.896 12:22:56 -- pm/common@44 -- $ pid=2270187 00:02:11.896 12:22:56 -- pm/common@50 -- $ sudo -E kill -TERM 2270187 00:02:12.155 12:22:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:12.155 12:22:56 -- nvmf/common.sh@7 -- # uname -s 00:02:12.155 12:22:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:12.155 12:22:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:12.155 12:22:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:12.155 12:22:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:12.155 12:22:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:12.155 12:22:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:12.155 12:22:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:12.155 12:22:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:12.155 12:22:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:12.155 12:22:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:12.156 12:22:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:12.156 12:22:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:12.156 12:22:56 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:12.156 12:22:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:12.156 12:22:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:12.156 12:22:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:12.156 12:22:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:12.156 12:22:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:12.156 12:22:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.156 12:22:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.156 12:22:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.156 12:22:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.156 12:22:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.156 12:22:56 -- paths/export.sh@5 -- # export PATH 00:02:12.156 12:22:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.156 12:22:56 -- nvmf/common.sh@47 -- # : 0 00:02:12.156 12:22:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:12.156 12:22:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:12.156 12:22:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:12.156 12:22:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:12.156 12:22:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:12.156 12:22:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:12.156 12:22:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:12.156 12:22:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:12.156 12:22:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:12.156 12:22:56 -- spdk/autotest.sh@32 -- # uname -s 00:02:12.156 12:22:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:12.156 12:22:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:12.156 12:22:56 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:12.156 12:22:56 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:12.156 12:22:56 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:12.156 12:22:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:12.156 12:22:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.156 12:22:56 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:02:12.156 12:22:56 -- spdk/autotest.sh@48 -- # udevadm_pid=2331614 00:02:12.156 12:22:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.156 12:22:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:12.156 12:22:56 -- pm/common@17 -- # local monitor 00:02:12.156 12:22:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.156 12:22:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.156 12:22:56 -- pm/common@21 -- # date +%s 00:02:12.156 12:22:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.156 12:22:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.156 12:22:56 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715768576 00:02:12.156 12:22:56 -- pm/common@21 -- # date +%s 00:02:12.156 12:22:56 -- pm/common@25 -- # sleep 1 00:02:12.156 12:22:56 -- pm/common@21 -- # date +%s 00:02:12.156 12:22:56 -- pm/common@21 -- # date +%s 00:02:12.156 12:22:56 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715768576 00:02:12.156 12:22:56 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715768576 00:02:12.156 12:22:56 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715768576 00:02:12.156 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715768576_collect-cpu-load.pm.log 00:02:12.156 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715768576_collect-vmstat.pm.log 00:02:12.156 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715768576_collect-cpu-temp.pm.log 00:02:12.156 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715768576_collect-bmc-pm.bmc.pm.log 00:02:13.093 12:22:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:13.093 12:22:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:13.093 12:22:57 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:13.093 12:22:57 -- common/autotest_common.sh@10 -- # set +x 00:02:13.093 12:22:57 -- spdk/autotest.sh@59 -- # create_test_list 00:02:13.093 12:22:57 -- common/autotest_common.sh@745 -- # xtrace_disable 00:02:13.093 12:22:57 -- common/autotest_common.sh@10 -- # set +x 00:02:13.352 12:22:57 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:13.352 12:22:57 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:13.352 12:22:57 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:13.352 12:22:57 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:13.352 12:22:57 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 
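
The lines above are autotest.sh bringing up its resource monitors before any test runs: each collect-* helper under scripts/perf/pm is launched against the shared spdk/../output/power directory with a log prefix built from the current epoch (monitor.autotest.sh.$(date +%s)), its output is redirected to the matching *_collect-*.pm.log file, and the collect-*.pid files left behind are what the stop_monitor_resources / kill -TERM sequence that closed out the make phase a little earlier in this log tears down. A minimal sketch of that start/stop pattern, assuming the helper paths and output layout shown here (the wrapper function names and the explicit backgrounding are illustrative, not the real pm/common code):

#!/usr/bin/env bash
# Illustrative only: mirrors the monitor start/stop pattern visible in this log;
# the real logic lives in spdk/scripts/perf/pm/common and autotest.sh.
out_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
pm_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm
stamp=$(date +%s)            # e.g. 1715768576 in the monitor.autotest.sh.* names above

start_monitors() {
    local m
    for m in collect-cpu-load collect-vmstat collect-cpu-temp; do
        # Same -d/-l/-p arguments as the log; backgrounding here stands in for
        # whatever daemonizing the real pm/common helper performs.
        "$pm_dir/$m" -d "$out_dir" -l -p "monitor.autotest.sh.$stamp" &
    done
    # collect-bmc-pm is the one helper the log runs under sudo -E.
    sudo -E "$pm_dir/collect-bmc-pm" -d "$out_dir" -l -p "monitor.autotest.sh.$stamp" &
}

stop_monitors() {
    local pidfile
    for pidfile in "$out_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        # Same TERM signal the signal_monitor_resources step sends at cleanup.
        kill -TERM "$(cat "$pidfile")" || true
    done
}

start_monitors

In the actual run the teardown is driven by the autobuild/autotest cleanup steps (the pm/common@50 "kill -TERM <pid>" lines earlier in this log) rather than by a trap inside the test itself.
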
00:02:13.352 12:22:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:13.352 12:22:57 -- common/autotest_common.sh@1452 -- # uname 00:02:13.352 12:22:57 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:02:13.352 12:22:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:13.352 12:22:57 -- common/autotest_common.sh@1472 -- # uname 00:02:13.353 12:22:57 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:02:13.353 12:22:57 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:13.353 12:22:57 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:13.353 12:22:57 -- spdk/autotest.sh@72 -- # hash lcov 00:02:13.353 12:22:57 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:13.353 12:22:57 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:13.353 12:22:57 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:13.353 12:22:57 -- common/autotest_common.sh@10 -- # set +x 00:02:13.353 12:22:57 -- spdk/autotest.sh@91 -- # rm -f 00:02:13.353 12:22:57 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:16.645 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:16.645 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:16.904 12:23:01 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:16.904 12:23:01 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:16.904 12:23:01 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:16.904 12:23:01 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:16.904 12:23:01 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:16.904 12:23:01 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:16.904 12:23:01 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:16.904 12:23:01 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:16.904 12:23:01 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:16.904 12:23:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:16.904 12:23:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:16.904 12:23:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:16.904 12:23:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:16.904 12:23:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:16.904 12:23:01 -- scripts/common.sh@387 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:16.904 No valid GPT data, bailing 00:02:16.904 12:23:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:16.904 12:23:01 -- scripts/common.sh@391 -- # pt= 00:02:16.904 12:23:01 -- scripts/common.sh@392 -- # return 1 00:02:16.904 12:23:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:16.904 1+0 records in 00:02:16.904 1+0 records out 00:02:16.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00547335 s, 192 MB/s 00:02:16.904 12:23:01 -- spdk/autotest.sh@118 -- # sync 00:02:16.904 12:23:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:16.904 12:23:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:16.904 12:23:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:23.475 12:23:07 -- spdk/autotest.sh@124 -- # uname -s 00:02:23.475 12:23:07 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:23.475 12:23:07 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:23.475 12:23:07 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:23.475 12:23:07 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:23.475 12:23:07 -- common/autotest_common.sh@10 -- # set +x 00:02:23.475 ************************************ 00:02:23.475 START TEST setup.sh 00:02:23.475 ************************************ 00:02:23.475 12:23:07 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:23.475 * Looking for test storage... 00:02:23.475 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:23.475 12:23:07 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:23.475 12:23:07 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:23.475 12:23:07 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:23.475 12:23:07 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:23.475 12:23:07 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:23.475 12:23:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:23.475 ************************************ 00:02:23.475 START TEST acl 00:02:23.475 ************************************ 00:02:23.475 12:23:08 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:23.475 * Looking for test storage... 
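Earlier in this trace the pre-clean step skips zoned namespaces, probes /dev/nvme0n1 for a partition table, and only zeroes the first MiB once both checks come back empty ("No valid GPT data, bailing", empty PTTYPE from blkid). A hedged, stand-alone sketch of that guard; the device name and 1 MiB wipe size are taken from the trace, error handling is kept minimal, and blkid/dd need root.

#!/usr/bin/env bash
set -euo pipefail
dev=${1:-/dev/nvme0n1}
name=$(basename "$dev")

# zoned namespaces are left alone: "none" in queue/zoned means a conventional device
if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
    echo "skipping zoned device $dev"
    exit 0
fi

# wipe only when blkid finds no partition-table signature on the device
pt=$(blkid -s PTTYPE -o value "$dev" || true)
if [[ -z $pt ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1    # clobber stale metadata, 1 MiB as in the trace
else
    echo "$dev carries a $pt partition table; leaving it in place"
fi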
00:02:23.475 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:23.475 12:23:08 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:23.475 12:23:08 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:23.475 12:23:08 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:23.734 12:23:08 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:23.734 12:23:08 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:23.734 12:23:08 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:23.734 12:23:08 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:23.734 12:23:08 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:23.734 12:23:08 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:23.734 12:23:08 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:23.734 12:23:08 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:23.734 12:23:08 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:23.734 12:23:08 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:23.734 12:23:08 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:23.734 12:23:08 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:23.734 12:23:08 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:27.929 12:23:11 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:27.929 12:23:11 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:27.929 12:23:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.929 12:23:11 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:27.929 12:23:11 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:27.929 12:23:11 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:30.464 Hugepages 00:02:30.464 node hugesize free / total 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.464 00:02:30.464 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.464 12:23:15 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.464 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
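The acl walk above reads the status listing row by row, keeps only entries whose second column looks like a PCI BDF and whose driver column is nvme (the ioatdma channels are all skipped with "continue"), and drops anything named in PCI_BLOCKED. A small self-contained version of that filter; the here-doc stands in for the real "setup.sh status" output.

#!/usr/bin/env bash
# Re-create the acl.sh filter: column 2 is the BDF, column 6 the bound driver.
declare -a devs
declare -A drivers
blocked=${PCI_BLOCKED:-}                     # space-separated BDFs to ignore

while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue        # skip the header and hugepage rows
    [[ $driver == nvme ]] || continue        # ioatdma channels are not test targets
    [[ $blocked == *"$dev"* ]] && continue   # honour the block list
    devs+=("$dev")
    drivers["$dev"]=$driver                  # same bookkeeping as acl.sh
done <<'EOF'
Type BDF Vendor Device NUMA Driver Device Block devices
I/OAT 0000:00:04.0 8086 2021 0 ioatdma -
NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
EOF

printf 'collected %d nvme device(s): %s\n' "${#devs[@]}" "${devs[*]}"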
00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:30.724 12:23:15 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:30.725 12:23:15 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:30.725 12:23:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.725 12:23:15 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:30.725 12:23:15 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:30.725 12:23:15 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:30.725 12:23:15 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:30.725 12:23:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:30.725 ************************************ 00:02:30.725 START TEST denied 00:02:30.725 ************************************ 00:02:30.725 12:23:15 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:02:30.725 12:23:15 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:30.725 12:23:15 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:30.725 12:23:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.725 12:23:15 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:30.725 12:23:15 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:34.919 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:34.919 
12:23:18 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:34.919 12:23:18 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.111 00:02:39.112 real 0m7.804s 00:02:39.112 user 0m2.380s 00:02:39.112 sys 0m4.715s 00:02:39.112 12:23:23 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:39.112 12:23:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:39.112 ************************************ 00:02:39.112 END TEST denied 00:02:39.112 ************************************ 00:02:39.112 12:23:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:39.112 12:23:23 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:39.112 12:23:23 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:39.112 12:23:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:39.112 ************************************ 00:02:39.112 START TEST allowed 00:02:39.112 ************************************ 00:02:39.112 12:23:23 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:02:39.112 12:23:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:39.112 12:23:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:39.112 12:23:23 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:39.112 12:23:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.112 12:23:23 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:44.389 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:44.389 12:23:28 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:44.389 12:23:28 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:44.389 12:23:28 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:44.389 12:23:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.389 12:23:28 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.681 00:02:47.681 real 0m8.599s 00:02:47.681 user 0m2.464s 00:02:47.681 sys 0m4.667s 00:02:47.681 12:23:31 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:47.681 12:23:31 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:47.681 ************************************ 00:02:47.681 END TEST allowed 00:02:47.681 ************************************ 00:02:47.681 00:02:47.681 real 0m23.842s 00:02:47.681 user 0m7.496s 00:02:47.681 sys 0m14.425s 00:02:47.681 12:23:31 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:47.681 12:23:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:47.681 ************************************ 00:02:47.681 END TEST acl 00:02:47.681 ************************************ 00:02:47.681 12:23:31 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.681 12:23:31 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 
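Both the denied and the allowed TEST sections boil down to the same verification seen in the trace: after setup.sh runs with PCI_BLOCKED or PCI_ALLOWED, resolve the controller's driver symlink in sysfs and compare its basename with the expected driver. A stand-alone sketch of that check, with the BDF and expected driver passed on the command line.

#!/usr/bin/env bash
# Usage: ./verify_driver.sh 0000:d8:00.0 vfio-pci
bdf=${1:?need a PCI BDF}
expected=${2:?need an expected driver name}

node=/sys/bus/pci/devices/$bdf
[[ -e $node ]] || { echo "$bdf not present" >&2; exit 1; }
[[ -e $node/driver ]] || { echo "$bdf has no driver bound" >&2; exit 1; }

# the driver symlink resolves to /sys/bus/pci/drivers/<name>; its basename is
# what the denied/allowed tests compare against
actual=$(basename "$(readlink -f "$node/driver")")
if [[ $actual == "$expected" ]]; then
    echo "$bdf is bound to $expected, as required"
else
    echo "$bdf is bound to $actual, expected $expected" >&2
    exit 1
fi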
00:02:47.681 12:23:31 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:47.681 12:23:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:47.681 ************************************ 00:02:47.681 START TEST hugepages 00:02:47.681 ************************************ 00:02:47.681 12:23:31 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.681 * Looking for test storage... 00:02:47.681 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.681 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40506932 kB' 'MemAvailable: 42152192 kB' 'Buffers: 3748 kB' 'Cached: 11289668 kB' 'SwapCached: 20048 kB' 'Active: 6988220 kB' 'Inactive: 4904208 kB' 'Active(anon): 6533640 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582380 kB' 'Mapped: 217536 kB' 'Shmem: 9154764 kB' 'KReclaimable: 307120 kB' 'Slab: 922992 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 615872 kB' 'KernelStack: 21936 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 11120636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216200 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.682 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.683 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.684 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.684 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:47.684 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.684 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.684 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.684 12:23:32 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:02:47.684 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:47.684 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:47.684 12:23:32 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:47.684 12:23:32 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:47.684 12:23:32 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:47.684 12:23:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:47.684 ************************************ 00:02:47.684 START TEST default_setup 00:02:47.684 ************************************ 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.684 12:23:32 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:50.983 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
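The default_setup run above asks for 2097152 kB of hugepages on node 0; with the 2048 kB Hugepagesize read out of /proc/meminfo that works out to 1024 pages, and the per-node counters live under /sys/devices/system/node/node*/hugepages/. A sketch of that arithmetic and the clear-then-set sequence; writing the sysfs counters needs root, and the paths follow the kernel layout rather than the SPDK helpers.

#!/usr/bin/env bash
set -euo pipefail
size_kb=${1:-2097152}      # total hugepage memory requested, in kB
node=${2:-0}               # NUMA node that should host the pages

hugepagesize=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # value the get_meminfo loop extracts
nr_pages=$(( size_kb / hugepagesize ))                                 # 2097152 / 2048 = 1024
echo "requesting $nr_pages pages of ${hugepagesize} kB on node $node"

# clear every node first (the CLEAR_HUGE behaviour above), then set the target node
for n in /sys/devices/system/node/node[0-9]*; do
    echo 0 | sudo tee "$n/hugepages/hugepages-${hugepagesize}kB/nr_hugepages" >/dev/null
done
echo "$nr_pages" | sudo tee \
    "/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepagesize}kB/nr_hugepages" >/dev/null

grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo    # confirm the allocation took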
00:02:50.983 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.983 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:52.436 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:52.436 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:52.436 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:52.436 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.436 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.436 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:52.436 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:52.436 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:52.436 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42736772 kB' 'MemAvailable: 44382032 kB' 'Buffers: 3748 kB' 'Cached: 11289804 kB' 'SwapCached: 20048 kB' 'Active: 7009172 kB' 'Inactive: 4904208 kB' 'Active(anon): 6554592 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603528 kB' 'Mapped: 218428 kB' 'Shmem: 9154900 kB' 'KReclaimable: 307120 kB' 'Slab: 921112 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613992 kB' 'KernelStack: 22080 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 
'Committed_AS: 11140308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216348 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.437 12:23:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:52.437 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[the common.sh@31-32 read/compare/continue cycle repeats for every remaining /proc/meminfo key, Active(anon) through HardwareCorrupted, none of which matches AnonHugePages]
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
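The xtrace above is setup/common.sh's get_meminfo helper scanning the meminfo snapshot one field at a time: IFS=': ' splits each line into a key and a value, every key that is not the requested one falls through to continue, and the value of the matching key (here AnonHugePages: 0 kB) is echoed and captured into anon. A minimal sketch of that scan pattern, reading /proc/meminfo directly rather than through the script's mapfile snapshot (the helper name is illustrative, not the exact SPDK source):

  # sketch: return the value of one /proc/meminfo field, as the traced loop does
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
          echo "$val"                        # kB for sizes, bare page counts for HugePages_*
          return 0
      done < /proc/meminfo
      return 1
  }
  # usage: anon=$(get_meminfo_sketch AnonHugePages)   # 0 in this run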
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.438 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42737988 kB' 'MemAvailable: 44383248 kB' 'Buffers: 3748 kB' 'Cached: 11289804 kB' 'SwapCached: 20048 kB' 'Active: 7004440 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549860 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598344 kB' 'Mapped: 217988 kB' 'Shmem: 9154900 kB' 'KReclaimable: 307120 kB' 'Slab: 921148 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 614028 kB' 'KernelStack: 22144 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11132720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216376 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB'
[the common.sh@31-32 read/compare/continue cycle walks this snapshot key by key, from MemTotal through HugePages_Rsvd, until the requested key is reached]
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
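Before each of these scans get_meminfo picks its data source: with no node argument (local node= above) the check [[ -e /sys/devices/system/node/node/meminfo ]] fails and the snapshot comes from /proc/meminfo; with a node it would read the per-node sysfs file and strip the leading "Node <n> " prefix, which is what the mapfile and the extglob expansion in the trace are for. A stand-alone illustration of that selection and prefix strip, with a hypothetical node number 0 (this run passes none):

  shopt -s extglob                      # the +([0-9]) pattern below needs extglob
  node=0                                # hypothetical; the traced calls leave node empty
  mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"             # snapshot the chosen file into an array
  mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 0 " prefix sysfs puts on each line
  printf '%s\n' "${mem[@]}" | grep '^HugePages_'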
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:52.704 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42737912 kB' 'MemAvailable: 44383172 kB' 'Buffers: 3748 kB' 'Cached: 11289824 kB' 'SwapCached: 20048 kB' 'Active: 7003628 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549048 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597492 kB' 'Mapped: 217648 kB' 'Shmem: 9154920 kB' 'KReclaimable: 307120 kB' 'Slab: 921204 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 614084 kB' 'KernelStack: 22112 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11133980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB'
[the common.sh@31-32 read/compare/continue cycle again walks the snapshot key by key, from MemTotal through HugePages_Free, until the requested key is reached]
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:52.706 nr_hugepages=1024
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:52.706 resv_hugepages=0
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:52.706 surplus_hugepages=0
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:52.706 anon_hugepages=0
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
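At setup/hugepages.sh@102-105 the test prints what it has gathered (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), then at @107 and @109 asserts that the bookkeeping is consistent: the expected 1024 default-sized hugepages must equal nr_hugepages plus surplus plus reserved pages, and must equal nr_hugepages on its own, before HugePages_Total is re-read below. A hedged sketch of the same consistency check against a live /proc/meminfo (the awk helper is an illustrative stand-in, not the SPDK code):

  # read one /proc/meminfo field (illustrative stand-in for get_meminfo)
  meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

  expected=1024                                 # the count default_setup asked for
  nr_hugepages=$(meminfo HugePages_Total)       # 1024 in this run
  surp=$(meminfo HugePages_Surp)                # 0
  resv=$(meminfo HugePages_Rsvd)                # 0
  anon=$(meminfo AnonHugePages)                 # 0 (kB)
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  # same assertions the trace shows at hugepages.sh@107 and @109
  (( expected == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
  (( expected == nr_hugepages )) || echo 'unexpected hugepage total' >&2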
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:52.706 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42736600 kB' 'MemAvailable: 44381860 kB' 'Buffers: 3748 kB' 'Cached: 11289848 kB' 'SwapCached: 20048 kB' 'Active: 7003660 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549080 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597540 kB' 'Mapped: 217648 kB' 'Shmem: 9154944 kB' 'KReclaimable: 307120 kB' 'Slab: 921204 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 614084 kB' 'KernelStack: 22112 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11132764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB'
[the common.sh@31-32 read/compare/continue cycle starts another key-by-key scan of this snapshot looking for HugePages_Total; MemTotal through Shmem have been checked and skipped so far]
00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r
var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.707 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:52.708 
12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21111212 kB' 'MemUsed: 11527928 kB' 'SwapCached: 17412 kB' 'Active: 3691988 kB' 'Inactive: 4016092 kB' 'Active(anon): 3645516 kB' 'Inactive(anon): 3213180 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7287568 kB' 'Mapped: 134648 kB' 'AnonPages: 423744 kB' 'Shmem: 6420772 kB' 'KernelStack: 13032 kB' 'PageTables: 5388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198264 kB' 'Slab: 524432 kB' 'SReclaimable: 198264 kB' 'SUnreclaim: 326168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:52.708 12:23:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.708 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.709 12:23:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.709 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:52.710 node0=1024 expecting 1024 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:52.710 00:02:52.710 real 0m5.028s 00:02:52.710 user 0m1.392s 00:02:52.710 sys 0m2.177s 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:52.710 12:23:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:52.710 ************************************ 00:02:52.710 END TEST default_setup 00:02:52.710 ************************************ 00:02:52.710 12:23:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:52.710 12:23:37 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:52.710 12:23:37 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:52.710 12:23:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:52.710 ************************************ 00:02:52.710 START TEST per_node_1G_alloc 00:02:52.710 ************************************ 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
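[annotation] The default_setup output above is almost entirely the xtrace of a get_meminfo-style lookup: the helper in setup/common.sh reads /proc/meminfo (or a per-node meminfo file when a node id is passed), splits each "Key: value" pair with IFS=': ', and skips entries until the requested key matches, then echoes the value. A minimal sketch of that pattern follows; the function name get_meminfo_sketch is illustrative only, and the real helper traced above may differ in detail.

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val
        # Per-node lookups use the node's meminfo file when it exists,
        # otherwise fall back to the system-wide /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # Per-node files prefix each entry with "Node <id> "; drop that
            # prefix so key names match the system-wide ones.
            line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # Example (values from this run):
    #   get_meminfo_sketch HugePages_Total    -> 1024
    #   get_meminfo_sketch HugePages_Surp 0   -> 0

[annotation] Each "[[ Key == \H\u\g\e... ]] / continue" pair in the trace is one iteration of that scan, which is why the log repeats the full meminfo key list for every lookup.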
00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.710 12:23:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:55.999 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:55.999 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42769116 kB' 'MemAvailable: 44414376 kB' 'Buffers: 3748 kB' 'Cached: 11289952 kB' 'SwapCached: 20048 kB' 'Active: 7003304 kB' 'Inactive: 4904208 kB' 'Active(anon): 6548724 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597112 kB' 'Mapped: 216536 kB' 'Shmem: 9155048 kB' 'KReclaimable: 307120 kB' 'Slab: 921064 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613944 kB' 'KernelStack: 22032 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11124740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216424 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 
13631488 kB' 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.999 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.000 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
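[Note on the trace above: this is the xtrace of setup/common.sh's get_meminfo helper resolving AnonHugePages. The helper loads a meminfo snapshot (falling back to /proc/meminfo here, since no NUMA node was given and the per-node path check failed), strips any "Node N " prefix, then walks the snapshot field by field with IFS=': ' until the requested key matches, echoing its value (0 in this run). hugepages.sh records that as anon=0 and immediately repeats the lookup for HugePages_Surp, which is the scan that follows. Below is a minimal standalone sketch of that lookup, reconstructed from the trace rather than copied from the SPDK source; the node-handling details and the trailing example calls are illustrative assumptions.]

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above (a reconstruction, not SPDK's
# verbatim setup/common.sh). It mirrors what the trace shows: read the
# snapshot, strip any per-node "Node N " prefix, scan until the key matches.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem_f mem
    mem_f=/proc/meminfo
    # The trace checks a per-node file first; with an empty node it falls
    # back to /proc/meminfo, which is what happened in this run.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # keep scanning until the key matches
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Illustrative calls mirroring the trace (values depend on the machine):
anon=$(get_meminfo AnonHugePages)    # 0 in the run above
surp=$(get_meminfo HugePages_Surp)   # the lookup that follows in the trace
echo "anon=$anon surp=$surp"

[The scan order is simply the order of fields in the meminfo snapshot, which is why the trace compares every earlier key (Zswapped, Dirty, Writeback, ..., HardwareCorrupted) before reaching the one requested; the same pattern repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total.]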
00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42768860 kB' 'MemAvailable: 44414120 kB' 'Buffers: 3748 kB' 'Cached: 11289956 kB' 'SwapCached: 20048 kB' 'Active: 7002688 kB' 'Inactive: 4904208 kB' 'Active(anon): 6548108 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596600 kB' 'Mapped: 216504 kB' 'Shmem: 9155052 kB' 'KReclaimable: 307120 kB' 'Slab: 921088 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613968 kB' 'KernelStack: 22064 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11124760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216392 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.001 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.002 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:56.266 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42770944 kB' 'MemAvailable: 44416204 kB' 'Buffers: 3748 kB' 'Cached: 11289972 kB' 'SwapCached: 20048 kB' 'Active: 7002392 kB' 'Inactive: 4904208 kB' 'Active(anon): 6547812 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596244 kB' 'Mapped: 216504 kB' 'Shmem: 9155068 kB' 'KReclaimable: 307120 kB' 'Slab: 921088 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613968 kB' 'KernelStack: 22064 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11124780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216392 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.266 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.267 nr_hugepages=1024 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.267 resv_hugepages=0 00:02:56.267 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.267 surplus_hugepages=0 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.267 anon_hugepages=0 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42771680 kB' 'MemAvailable: 44416940 kB' 'Buffers: 3748 kB' 'Cached: 11289996 kB' 'SwapCached: 20048 kB' 'Active: 7002420 kB' 'Inactive: 4904208 kB' 'Active(anon): 6547840 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596244 kB' 'Mapped: 216504 kB' 'Shmem: 9155092 kB' 'KReclaimable: 307120 kB' 'Slab: 921088 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613968 kB' 'KernelStack: 22064 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11124804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216392 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.268 12:23:40 
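The get_meminfo call traced above resolves HugePages_Total to 1024: it reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node index is passed), strips any leading "Node N " prefix, splits each line on ': ', and echoes the value for the requested key. A minimal stand-alone sketch of that logic, reconstructed from the traced statements (the function name and the fallback handling here are illustrative, not the exact setup/common.sh source):

  shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N "

  get_meminfo_sketch() {
      local get=$1 node=${2:-}        # key to look up, optional NUMA node index
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local mem var val _
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"                # e.g. 1024 for HugePages_Total in this run
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

On the machine in this log, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp 0 would print 0, matching the echo/return values in the trace.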
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:56.268 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22170140 kB' 'MemUsed: 10469000 kB' 'SwapCached: 17412 kB' 'Active: 3690320 kB' 'Inactive: 4016092 kB' 'Active(anon): 3643848 kB' 'Inactive(anon): 3213180 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7287696 kB' 'Mapped: 133512 kB' 'AnonPages: 421956 kB' 'Shmem: 6420900 kB' 'KernelStack: 13080 kB' 'PageTables: 5432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198264 kB' 'Slab: 524420 kB' 'SReclaimable: 198264 kB' 'SUnreclaim: 326156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.269 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20603736 kB' 'MemUsed: 7052344 kB' 'SwapCached: 2636 kB' 'Active: 3311708 kB' 'Inactive: 888116 kB' 'Active(anon): 2903600 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 881160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4026136 kB' 'Mapped: 82992 kB' 'AnonPages: 173864 kB' 'Shmem: 2734232 kB' 'KernelStack: 8968 kB' 'PageTables: 3400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108856 kB' 'Slab: 396668 kB' 'SReclaimable: 108856 kB' 'SUnreclaim: 287812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
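Between the two per-node meminfo dumps, the hugepages.sh @115-@117 statements in this trace fold the reserved count and each node's HugePages_Surp into the expected per-node totals before the final comparison. Roughly, with this run's values (nodes_test was seeded with 512 per node, resv and both surplus counts are 0 here; get_meminfo_sketch is the helper sketched earlier):

  nodes_test=([0]=512 [1]=512)    # expected hugepages per NUMA node
  resv=0                          # reserved pages, 0 in this run
  for node in "${!nodes_test[@]}"; do
      nodes_test[node]=$((nodes_test[node] + resv))
      surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 for node0 and node1 above
      nodes_test[node]=$((nodes_test[node] + surp))
  done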
00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:56.270 node0=512 expecting 512 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.270 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:56.271 node1=512 expecting 512 00:02:56.271 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:56.271 00:02:56.271 real 0m3.476s 00:02:56.271 user 0m1.310s 00:02:56.271 sys 0m2.230s 00:02:56.271 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:56.271 12:23:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.271 ************************************ 00:02:56.271 END TEST per_node_1G_alloc 00:02:56.271 ************************************ 00:02:56.271 12:23:40 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:56.271 12:23:40 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:56.271 12:23:40 
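The 'node0=512 expecting 512' and 'node1=512 expecting 512' lines are the pass condition for per_node_1G_alloc: each node's observed total (nodes_test) must equal the count the test expects for that node (nodes_sys, filled in by the get_nodes glob over /sys/devices/system/node/node+([0-9])). Condensed with this run's values (the [[ 512 == 512 ]] in the trace is the resulting comparison; its exact operands are not spelled out in the log, so this is an approximation):

  nodes_test=([0]=512 [1]=512)    # observed per-node totals after adding surplus/reserved
  nodes_sys=([0]=512 [1]=512)     # expected per-node counts from get_nodes
  declare -A sorted_t sorted_s    # used as sets of the distinct counts seen
  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1
      sorted_s[${nodes_sys[node]}]=1
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  [[ ${nodes_test[0]} == "${nodes_sys[0]}" ]] && echo "per-node allocation matches"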
setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:56.271 12:23:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.271 ************************************ 00:02:56.271 START TEST even_2G_alloc 00:02:56.271 ************************************ 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.271 12:23:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:59.559 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
00:02:59.559 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.559 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42811044 kB' 'MemAvailable: 44456304 kB' 'Buffers: 3748 kB' 'Cached: 11290120 kB' 'SwapCached: 20048 kB' 'Active: 7004104 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549524 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597228 kB' 'Mapped: 216608 kB' 'Shmem: 9155216 kB' 'KReclaimable: 307120 kB' 'Slab: 920564 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613444 kB' 'KernelStack: 21984 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11125432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216376 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.823 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.824 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
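[editor's aside] The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo field by field: every key that is not the requested one (here AnonHugePages) takes the `continue` branch, and when the key never matches the helper falls through to `echo 0`, which is why the caller records anon=0 at hugepages.sh@97. Below is a minimal standalone sketch of that lookup pattern, assuming a bash shell with sed available; the function name meminfo_value and the sed-based "Node N " prefix strip are illustrative stand-ins, not the project's actual implementation (which uses mapfile plus an extglob substitution, as the trace shows).

  # Sketch only: look up one field in (per-node) meminfo, printing 0 if absent.
  meminfo_value() {
      local get=$1 node=${2:-}          # field name, optional NUMA node number
      local mem_f=/proc/meminfo
      # per-node lookups read the node's own meminfo file when it exists
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every other field just continues
          echo "${val:-0}"
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      echo 0                                 # field absent -> report 0
  }
  # e.g. anon=$(meminfo_value AnonHugePages); surp=$(meminfo_value HugePages_Surp)

The same loop is replayed immediately below for HugePages_Surp and HugePages_Rsvd, which likewise end in surp=0 and a reserved count of 0.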
00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42812124 kB' 'MemAvailable: 44457384 kB' 'Buffers: 3748 kB' 'Cached: 11290124 kB' 'SwapCached: 20048 kB' 'Active: 7004008 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549428 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597592 kB' 'Mapped: 216512 kB' 'Shmem: 9155220 kB' 'KReclaimable: 307120 kB' 'Slab: 920524 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613404 kB' 'KernelStack: 21952 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11125452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216344 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.825 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.826 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42812784 kB' 'MemAvailable: 44458044 kB' 'Buffers: 3748 kB' 'Cached: 11290140 kB' 'SwapCached: 20048 kB' 'Active: 7003696 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549116 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597248 kB' 'Mapped: 216512 kB' 'Shmem: 9155236 kB' 'KReclaimable: 307120 kB' 'Slab: 920524 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613404 kB' 'KernelStack: 21952 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11125472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216344 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 
12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.827 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:59.828 nr_hugepages=1024 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:59.828 resv_hugepages=0 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:59.828 surplus_hugepages=0 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:59.828 anon_hugepages=0 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:59.828 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
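The xtrace above, and the pass that restarts just below for HugePages_Total, is the meminfo lookup helper from setup/common.sh: every "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" pair is one turn of a scan over /proc/meminfo (or a per-node copy when a node index is supplied) until the requested field is reached and its value echoed back. A minimal sketch of that pattern, assuming a meminfo-style input and simplifying the printf/array plumbing the real helper uses (function and variable names here are illustrative, not the verbatim SPDK code):

shopt -s extglob

get_meminfo_sketch() {
    # Hypothetical stand-in for setup/common.sh's get_meminfo.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val rest
    # A per-node meminfo takes precedence when a node index is supplied;
    # with an empty index the path has no number and the test simply fails,
    # as seen in the trace above.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so the keys
    # match the plain /proc/meminfo layout (needs extglob for +([0-9])).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # One loop turn per "[[ <key> == ... ]] / continue" pair in the log.
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd    # prints 0 in the run logged here
get_meminfo_sketch HugePages_Total   # prints 1024 in the run logged here

The backslash-escaped key in the trace is only how xtrace renders the quoted right-hand side of the comparison; the match is a literal string compare, not a glob.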
00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42814044 kB' 'MemAvailable: 44459304 kB' 'Buffers: 3748 kB' 'Cached: 11290164 kB' 'SwapCached: 20048 kB' 'Active: 7003780 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549200 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597356 kB' 'Mapped: 216512 kB' 'Shmem: 9155260 kB' 'KReclaimable: 307120 kB' 'Slab: 920524 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613404 kB' 'KernelStack: 21968 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11125128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216312 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 
12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.829 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.830 
12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.830 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22202156 kB' 'MemUsed: 10436984 kB' 'SwapCached: 17412 kB' 'Active: 3691776 kB' 'Inactive: 4016092 kB' 'Active(anon): 3645304 kB' 'Inactive(anon): 3213180 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7287784 kB' 'Mapped: 133520 kB' 'AnonPages: 423316 kB' 'Shmem: 6420988 kB' 'KernelStack: 13048 kB' 
'PageTables: 5308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198264 kB' 'Slab: 523944 kB' 'SReclaimable: 198264 kB' 'SUnreclaim: 325680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.831 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20611748 kB' 'MemUsed: 7044332 kB' 'SwapCached: 2636 kB' 'Active: 3311672 kB' 'Inactive: 888116 kB' 'Active(anon): 2903564 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 881160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4026176 kB' 'Mapped: 82992 kB' 'AnonPages: 173632 kB' 'Shmem: 2734272 kB' 'KernelStack: 8856 kB' 
'PageTables: 3084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108856 kB' 'Slab: 396580 kB' 'SReclaimable: 108856 kB' 'SUnreclaim: 287724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.832 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:59.833 node0=512 expecting 512 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:59.833 node1=512 expecting 512 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:59.833 00:02:59.833 real 0m3.605s 00:02:59.833 user 0m1.334s 00:02:59.833 sys 0m2.322s 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:59.833 12:23:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:59.833 ************************************ 00:02:59.833 END TEST even_2G_alloc 00:02:59.833 ************************************ 00:03:00.091 12:23:44 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:00.091 12:23:44 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:00.091 12:23:44 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:00.091 12:23:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:00.091 ************************************ 00:03:00.091 START TEST odd_alloc 00:03:00.091 
************************************ 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.091 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.092 12:23:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:03.380 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:03.380 
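The odd_alloc preamble above asks for 2098176 kB of 2 MB hugepages (HUGEMEM=2049, HUGE_EVEN_ALLOC=yes) and arrives at nr_hugepages=1025, spread over the two NUMA nodes as node0=513 and node1=512. A minimal sketch of that per-node split, assuming 2048 kB pages and using a made-up function name (this is not the project's hugepages.sh):

# Sketch only: distribute a hugepage count across NUMA nodes so that any
# remainder lands on the lower-numbered node, matching the 513/512 split
# the trace above arrives at for 1025 pages on 2 nodes.
split_hugepages_per_node() {
    local nr=$1 no_nodes=$2 node share
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        share=$(( nr / (node + 1) ))    # integer share for this node
        echo "node${node}=${share}"
        nr=$(( nr - share ))            # remainder carried to the remaining nodes
    done
}
split_hugepages_per_node 1025 2    # prints node1=512, then node0=513

With an even request the shares come out equal, which is why the even_2G_alloc run above expected 512 pages on each node.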
0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:03.380 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42842240 kB' 'MemAvailable: 44487500 kB' 'Buffers: 3748 kB' 'Cached: 11290292 kB' 'SwapCached: 20048 kB' 'Active: 7006300 kB' 'Inactive: 4904208 kB' 'Active(anon): 6551720 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599640 kB' 'Mapped: 216296 kB' 'Shmem: 9155388 kB' 'KReclaimable: 307120 kB' 'Slab: 920444 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613324 kB' 'KernelStack: 22080 kB' 'PageTables: 9164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11127168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 
'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.380 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.381 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 
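The AnonHugePages lookup that just returned 0, and the HugePages_Surp lookup now being set up, both work the same way: read /proc/meminfo (or a node's meminfo file when a node is given), strip any "Node N " prefix, and scan key by key until the requested field matches, which is why the trace repeats one comparison per field. A rough, self-contained stand-in, assuming the illustrative name meminfo_value (the real helper lives in setup/common.sh and differs in detail):

# Sketch of the field scan the surrounding trace performs; simplified, not
# the actual setup/common.sh implementation.
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # use the per-node file when a node is given and sysfs exposes one
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node "$node" }              # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Free val=512
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# meminfo_value AnonHugePages      -> system-wide value (kB)
# meminfo_value HugePages_Surp 0   -> value for NUMA node 0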
12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42842356 kB' 'MemAvailable: 44487616 kB' 'Buffers: 3748 kB' 'Cached: 11290296 kB' 'SwapCached: 20048 kB' 'Active: 7005192 kB' 'Inactive: 4904208 kB' 'Active(anon): 6550612 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598576 kB' 'Mapped: 216216 kB' 'Shmem: 9155392 kB' 'KReclaimable: 307120 kB' 'Slab: 920372 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613252 kB' 'KernelStack: 22176 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11128884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.382 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.383 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42841148 kB' 'MemAvailable: 44486408 kB' 'Buffers: 3748 kB' 'Cached: 11290296 kB' 'SwapCached: 20048 kB' 'Active: 7005256 kB' 'Inactive: 4904208 kB' 'Active(anon): 6550676 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598672 kB' 'Mapped: 216216 kB' 'Shmem: 9155392 kB' 'KReclaimable: 307120 kB' 'Slab: 920372 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613252 kB' 'KernelStack: 22128 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11128904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 
12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.384 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
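(The repeated "continue" lines in this trace are setup/common.sh's get_meminfo helper scanning one meminfo field per loop iteration: it loads /proc/meminfo, or /sys/devices/system/node/node$N/meminfo when a node argument is given, splits each line with IFS=': ', and skips every field until the requested one, here HugePages_Rsvd, matches, at which point it echoes the value. A minimal sketch of that lookup, assuming only what the trace itself shows (the helper's name, the two meminfo sources, and the IFS=': ' field split), follows; the real SPDK script may differ in detail.)

get_meminfo_sketch() {                      # sketch only, not the SPDK helper itself
    local get=$1 node=$2                    # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node meminfo files prefix each line with "Node <n> "; strip that so both
    # formats parse identically, then scan line by line for the requested field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Against the meminfo snapshot printed above, get_meminfo_sketch HugePages_Rsvd
# echoes 0 and get_meminfo_sketch HugePages_Total echoes 1025.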
00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:03.385 nr_hugepages=1025 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.385 resv_hugepages=0 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.385 surplus_hugepages=0 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.385 anon_hugepages=0 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.385 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.386 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.386 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.386 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42841472 kB' 'MemAvailable: 44486732 kB' 'Buffers: 3748 kB' 'Cached: 11290296 kB' 'SwapCached: 20048 kB' 'Active: 7005568 kB' 'Inactive: 4904208 kB' 'Active(anon): 6550988 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598984 kB' 'Mapped: 216216 kB' 'Shmem: 9155392 kB' 'KReclaimable: 307120 kB' 'Slab: 920372 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613252 kB' 'KernelStack: 22096 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11128924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:03.386 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.386 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.648 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.649 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22211172 kB' 'MemUsed: 10427968 kB' 'SwapCached: 17412 kB' 'Active: 3693144 kB' 'Inactive: 4016092 kB' 'Active(anon): 3646672 kB' 'Inactive(anon): 3213180 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7287948 kB' 'Mapped: 133516 kB' 'AnonPages: 424532 kB' 'Shmem: 6421152 kB' 'KernelStack: 13048 kB' 'PageTables: 5228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198264 kB' 'Slab: 523980 kB' 'SReclaimable: 198264 kB' 'SUnreclaim: 325716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.650 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20631652 kB' 'MemUsed: 7024428 kB' 'SwapCached: 2636 kB' 'Active: 3312104 kB' 'Inactive: 888116 kB' 'Active(anon): 2903996 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 881160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4026204 kB' 'Mapped: 82700 kB' 'AnonPages: 174044 kB' 'Shmem: 2734300 kB' 'KernelStack: 9064 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108856 kB' 'Slab: 396392 kB' 'SReclaimable: 108856 kB' 'SUnreclaim: 287536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.651 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:03.652 node0=512 expecting 513 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:03.652 node1=513 expecting 512 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:03.652 00:03:03.652 real 0m3.570s 00:03:03.652 user 0m1.357s 00:03:03.652 sys 0m2.276s 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:03.652 12:23:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:03.652 ************************************ 00:03:03.652 END TEST odd_alloc 00:03:03.652 ************************************ 00:03:03.652 12:23:48 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:03.652 12:23:48 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:03.652 12:23:48 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:03.653 12:23:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:03.653 ************************************ 00:03:03.653 START TEST custom_alloc 00:03:03.653 ************************************ 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.653 12:23:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:06.940 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:03:06.940 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:06.940 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:07.203 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:07.203 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:07.203 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:07.203 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.203 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.203 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:07.203 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:07.203 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41817476 kB' 'MemAvailable: 43462736 kB' 'Buffers: 3748 kB' 'Cached: 11290456 kB' 'SwapCached: 20048 kB' 'Active: 7005004 kB' 'Inactive: 4904208 kB' 'Active(anon): 6550424 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597960 kB' 'Mapped: 216648 kB' 'Shmem: 9155552 kB' 'KReclaimable: 307120 kB' 'Slab: 920324 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613204 kB' 'KernelStack: 22064 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11126636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.204 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
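Note on the helper generating most of this xtrace: setup/common.sh's get_meminfo maps /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node number is given) into an array, strips the leading "Node N " prefix from per-node lines, then walks the "key: value" pairs until the requested field matches and echoes its value. The sketch below condenses that logic from the traced commands; the real helper steps over each field with continue exactly as the log shows, while this version uses a read loop.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N "

# Condensed sketch of get_meminfo as traced above:
# get_meminfo <field> [node]  ->  prints the numeric value of <field>.
get_meminfo() {
	local get=$1 node=$2
	local mem_f=/proc/meminfo
	local -a mem
	local var val _

	# Per-node queries read the node's own meminfo file when it exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "

	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# Example calls mirroring the trace: system-wide total, then node 1 surplus
# (the per-node path only exists on a NUMA Linux machine).
get_meminfo HugePages_Total
get_meminfo HugePages_Surp 1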
00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41818908 kB' 'MemAvailable: 43464168 kB' 'Buffers: 3748 kB' 'Cached: 11290460 kB' 'SwapCached: 20048 kB' 'Active: 7004040 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549460 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597356 kB' 'Mapped: 216552 kB' 'Shmem: 9155556 kB' 'KReclaimable: 307120 kB' 'Slab: 920272 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613152 kB' 'KernelStack: 22032 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11126656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.205 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 
12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.206 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
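(The trace above is setup/common.sh's get_meminfo scanning the captured /proc/meminfo snapshot one "field: value" pair at a time while it looks for HugePages_Surp; every other field falls through to the continue branch, which is why each meminfo key appears exactly once before the lookup finally echoes its value further down. A minimal sketch of that lookup loop, with an illustrative function name rather than the real common.sh helper:

# Sketch only -- approximates the lookup loop traced above; lookup_meminfo
# is an illustrative name, the real helper lives in test/setup/common.sh.
lookup_meminfo() {
    local get=$1 mem_f=/proc/meminfo var val _
    while IFS=': ' read -r var val _; do
        # Every field other than the requested one falls through to continue,
        # which is what produces the long run of trace lines above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
# e.g. lookup_meminfo HugePages_Surp  -> prints 0 on this machine, matching
# the "echo 0" the trace reaches below.
)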
00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41818656 kB' 'MemAvailable: 43463916 kB' 'Buffers: 3748 kB' 'Cached: 11290476 kB' 'SwapCached: 20048 kB' 'Active: 7004020 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549440 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597364 kB' 'Mapped: 216552 kB' 'Shmem: 
9155572 kB' 'KReclaimable: 307120 kB' 'Slab: 920272 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613152 kB' 'KernelStack: 22032 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11126676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216424 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 
12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.207 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.208 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
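(The long quoted 'MemTotal: ... kB' runs, one before each per-key scan, are the snapshot itself: get_meminfo slurps the whole meminfo file into an array with mapfile and replays it with printf before scanning for the requested key. A minimal sketch of that capture step, again with an illustrative name rather than the actual common.sh function:

# Sketch only -- the capture step behind the long quoted snapshot lines;
# snapshot_meminfo is an illustrative name, not the real common.sh helper.
snapshot_meminfo() {
    local mem_f=${1:-/proc/meminfo}
    local -a mem
    mapfile -t mem < "$mem_f"       # read the whole file in one shot
    printf '%s\n' "${mem[@]}"       # replay it line by line for the key scan
}
)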
00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:07.209 nr_hugepages=1536 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.209 resv_hugepages=0 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.209 surplus_hugepages=0 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.209 anon_hugepages=0 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41818540 kB' 'MemAvailable: 43463800 kB' 'Buffers: 3748 kB' 'Cached: 11290512 kB' 'SwapCached: 20048 kB' 'Active: 7004096 kB' 'Inactive: 4904208 kB' 'Active(anon): 6549516 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597376 kB' 'Mapped: 216552 kB' 'Shmem: 9155608 kB' 'KReclaimable: 307120 kB' 'Slab: 920268 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613148 kB' 'KernelStack: 22032 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11126696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216424 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.209 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.210 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22224500 kB' 'MemUsed: 10414640 kB' 'SwapCached: 17412 kB' 'Active: 3690384 kB' 'Inactive: 4016092 kB' 'Active(anon): 3643912 kB' 'Inactive(anon): 3213180 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7288088 kB' 'Mapped: 133560 kB' 'AnonPages: 421584 kB' 'Shmem: 6421292 kB' 'KernelStack: 13080 kB' 'PageTables: 5380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198264 kB' 'Slab: 523972 kB' 'SReclaimable: 198264 kB' 'SUnreclaim: 325708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.211 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
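The xtrace running through here is setup/common.sh's get_meminfo helper walking the node0 meminfo dump key by key (common.sh@28-33) until it reaches the field it was asked for. A minimal sketch of that lookup, reconstructed from the trace for illustration only (the function name is mine; this is not the verbatim SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch of the lookup pattern traced in common.sh@17-33: pick the right
    # meminfo file, strip the "Node N " prefix, then scan key/value pairs and
    # print the value of the requested field. Reconstructed from the xtrace.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo, as the trace does for node0.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                # e.g. 512 for node0's HugePages_Total
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On the box in this log, get_meminfo_sketch HugePages_Surp 0 would print 0, which is exactly the echo 0 / return 0 the trace reaches just below.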
00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
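At this point node0's surplus lookup has returned 0 and the loop moves on to node1. The accounting hugepages.sh@115-117 performs is simple: for each node, fold the reserved and surplus pages into that node's expected count. A small sketch under the values seen in this run (nodes_test already holding the 512/1024 split, HugePages_Rsvd 0), reusing the get_meminfo_sketch helper above; an illustration, not the test script itself:

    nodes_test=([0]=512 [1]=1024)   # per-node targets from this run
    resv=0                          # HugePages_Rsvd reported earlier in the log
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 for both nodes here
        (( nodes_test[node] += surp ))
    done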
00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 19595128 kB' 'MemUsed: 8060952 kB' 'SwapCached: 2636 kB' 'Active: 3313368 kB' 'Inactive: 888116 kB' 'Active(anon): 2905260 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 881160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4026244 kB' 'Mapped: 82992 kB' 'AnonPages: 175376 kB' 'Shmem: 2734340 kB' 'KernelStack: 8936 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108856 kB' 'Slab: 396296 kB' 'SReclaimable: 108856 kB' 'SUnreclaim: 287440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.212 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
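The node1 scan below ends the same way node0's did, and the test then folds the per-node results into the "node0=512 expecting 512" / "node1=1024 expecting 1024" lines printed a little further down (hugepages.sh@126-130). A sketch of that final check with this run's values filled in; how the 512,1024 string is assembled is my assumption, since the trace only shows the comparison itself:

    declare -A sorted_t sorted_s
    nodes_test=([0]=512 [1]=1024)   # observed per-node totals
    nodes_sys=([0]=512 [1]=1024)    # counts collected by get_nodes from sysfs
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    want="512,1024"
    got=$(IFS=,; echo "${nodes_test[*]}")   # "512,1024"
    [[ $got == "$want" ]] && echo "custom_alloc split verified"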
00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.213 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:07.214 node0=512 expecting 512 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:07.214 node1=1024 expecting 1024 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:07.214 00:03:07.214 real 0m3.642s 00:03:07.214 user 0m1.421s 00:03:07.214 sys 0m2.282s 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:07.214 12:23:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:07.214 ************************************ 00:03:07.214 END TEST custom_alloc 00:03:07.214 ************************************ 00:03:07.472 12:23:51 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:07.472 12:23:51 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:07.472 12:23:51 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:07.472 12:23:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:07.472 ************************************ 00:03:07.472 START TEST no_shrink_alloc 00:03:07.472 ************************************ 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:07.472 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:07.473 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.473 12:23:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:10.764 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:10.764 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42844776 kB' 'MemAvailable: 44490036 kB' 'Buffers: 3748 kB' 'Cached: 11290616 kB' 'SwapCached: 20048 kB' 'Active: 7006232 kB' 'Inactive: 4904208 kB' 'Active(anon): 6551652 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598912 kB' 'Mapped: 216648 kB' 'Shmem: 9155712 kB' 'KReclaimable: 307120 kB' 'Slab: 919912 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 612792 kB' 'KernelStack: 22016 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11129888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.764 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.765 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42845600 kB' 'MemAvailable: 44490860 kB' 'Buffers: 3748 kB' 'Cached: 11290616 kB' 'SwapCached: 20048 kB' 'Active: 7005376 kB' 'Inactive: 4904208 kB' 'Active(anon): 6550796 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598420 kB' 'Mapped: 216552 kB' 'Shmem: 9155712 kB' 'KReclaimable: 307120 kB' 'Slab: 919892 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 612772 kB' 'KernelStack: 21952 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11127768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 
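The scan traced above and below is setup/common.sh's get_meminfo helper at work: common.sh@16 prints the captured meminfo snapshot line by line, common.sh@31 splits each line with IFS=': ' into key and value, common.sh@32 skips every key that is not the one requested, and common.sh@33 echoes the matching value and returns 0. The following is a minimal standalone sketch of that lookup pattern, reading /proc/meminfo directly instead of the script's cached array; it illustrates the pattern seen in the trace, not the exact SPDK helper:

    get_meminfo() {
        local get=$1 var val _
        # Split "Key:   value kB" on ':' / spaces; keep only the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo AnonHugePages)    # 0 in the run traced here
    surp=$(get_meminfo HugePages_Surp)   # 0 in the run traced here

The escaped patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are simply how bash xtrace prints the right-hand side of that [[ ... == ... ]] comparison.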
12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.766 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.767 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42845136 kB' 'MemAvailable: 44490396 kB' 'Buffers: 3748 kB' 'Cached: 11290636 kB' 'SwapCached: 20048 kB' 'Active: 7005360 kB' 'Inactive: 4904208 kB' 'Active(anon): 6550780 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598364 kB' 'Mapped: 216552 kB' 'Shmem: 9155732 kB' 'KReclaimable: 307120 kB' 'Slab: 919892 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 612772 kB' 'KernelStack: 21968 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11127792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- 
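The preamble of each lookup (common.sh@17-29 in the trace, repeated just above for HugePages_Rsvd) first chooses where the snapshot comes from: with no node argument the test [[ -e /sys/devices/system/node/node/meminfo ]] can never succeed, because the empty node number collapses the path to .../node/node/meminfo, so mem_f stays /proc/meminfo; the expansion mem=("${mem[@]#Node +([0-9]) }") then strips the "Node N " prefix that per-node meminfo files carry, so both sources parse the same way. A hedged sketch of that source selection; the optional-node interface is inferred from the path construction in the trace, not quoted from common.sh:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    read_meminfo() {
        local node=${1:-} mem_f mem
        mem_f=/proc/meminfo
        # Only a real node number makes this path exist (e.g. .../node/node0/meminfo);
        # an empty $node yields .../node/node/meminfo, which never does.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; drop that prefix
        # so the same "Key: value" parsing works for both sources.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    read_meminfo   | grep '^HugePages_Total'   # system-wide
    read_meminfo 0 | grep '^HugePages_Total'   # NUMA node 0, if present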
setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.768 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 
12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.769 12:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.769 nr_hugepages=1024 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.769 resv_hugepages=0 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.769 surplus_hugepages=0 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.769 anon_hugepages=0 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.769 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42843880 kB' 'MemAvailable: 44489140 kB' 'Buffers: 3748 kB' 'Cached: 11290656 kB' 'SwapCached: 20048 kB' 'Active: 7004976 kB' 'Inactive: 4904208 kB' 'Active(anon): 6550396 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597936 kB' 'Mapped: 216552 kB' 'Shmem: 9155752 kB' 'KReclaimable: 307120 kB' 'Slab: 919892 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 612772 kB' 'KernelStack: 21952 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11127812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 
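Just before the HugePages_Total scan that continues below, hugepages.sh@97-105 has collected anon=0, surp=0 and resv=0 from the three lookups above and echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; hugepages.sh@107 and @109 then check (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), i.e. 1024 == 1024 + 0 + 0, so the configured pool is still fully accounted for. A small standalone re-check along the same lines; the helper name and reading nr_hugepages from /proc/sys/vm/nr_hugepages are illustrative assumptions, not the test's own code:

    meminfo_val() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)   # assumption: the pool size requested for the test
    total=$(meminfo_val HugePages_Total)            # 1024 in the run traced here
    resv=$(meminfo_val HugePages_Rsvd)              # 0
    surp=$(meminfo_val HugePages_Surp)              # 0

    # Same relation as the checks logged at hugepages.sh@107 and @109 (illustrative).
    if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "hugepage accounting mismatch" >&2
    fi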
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.770 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.770 
12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.771 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21175984 kB' 'MemUsed: 11463156 kB' 'SwapCached: 17412 kB' 'Active: 3691976 kB' 'Inactive: 4016092 kB' 'Active(anon): 3645504 kB' 'Inactive(anon): 3213180 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7288180 kB' 'Mapped: 133560 kB' 'AnonPages: 423052 kB' 'Shmem: 6421384 kB' 'KernelStack: 13048 kB' 'PageTables: 5304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198264 kB' 'Slab: 523404 kB' 'SReclaimable: 198264 kB' 'SUnreclaim: 325140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.772 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:10.773 node0=1024 expecting 1024 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.773 12:23:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:13.303 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:13.303 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:13.303 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:13.303 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:13.303 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:13.303 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:13.303 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:13.303 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:13.303 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:13.566 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:13.566 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:13.566 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:13.566 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:13.566 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:13.566 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:13.566 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:13.566 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:13.566 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42832172 kB' 'MemAvailable: 44477432 kB' 'Buffers: 3748 kB' 'Cached: 11290748 kB' 'SwapCached: 20048 kB' 'Active: 7006636 kB' 'Inactive: 4904208 kB' 'Active(anon): 6552056 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599084 kB' 'Mapped: 216656 kB' 'Shmem: 9155844 kB' 'KReclaimable: 307120 kB' 'Slab: 920540 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613420 kB' 'KernelStack: 21936 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11128416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
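For orientation, the long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries around this point is the traced field-by-field scan of a meminfo file: each line is split on ': ', the key is compared against the field being looked up, and the matching value is echoed back to the caller. The sketch below is a minimal stand-alone illustration of that lookup pattern; the helper name get_meminfo_field and its argument handling are assumptions made here for clarity, not the SPDK setup/common.sh implementation.
#!/usr/bin/env bash
# Illustrative helper (hypothetical name): print one field from /proc/meminfo,
# or from a per-node meminfo file when a NUMA node number is given.
get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}      # per-node files prefix each line with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 1024 for HugePages_Total in this run
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Usage: get_meminfo_field HugePages_Total
#        get_meminfo_field HugePages_Surp 0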
00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.566 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
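The check this scan feeds was already visible earlier in the trace: HugePages_Total (1024) had to satisfy (( 1024 == nr_hugepages + surp + resv )), and the per-node split was reported as "node0=1024 expecting 1024". Below is a hedged, self-contained sketch of that per-node comparison; the expected table and the awk-based read are assumptions for illustration, the values match this run, but the code is not the hugepages.sh implementation.
#!/usr/bin/env bash
# Illustrative check (not the SPDK hugepages.sh code): compare the hugepages
# actually allocated on each NUMA node against what this run expects.
declare -A expected=([0]=1024 [1]=0)    # per-node expectation seen in this log

for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node${n}=${total} expecting ${expected[$n]:-0}"
    if (( total != ${expected[$n]:-0} )); then
        echo "node${n}: hugepage count mismatch" >&2
    fi
done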
00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.567 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42833056 kB' 'MemAvailable: 44478316 kB' 'Buffers: 3748 kB' 'Cached: 11290764 kB' 'SwapCached: 20048 kB' 'Active: 7006580 kB' 'Inactive: 4904208 kB' 'Active(anon): 6552000 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599452 kB' 'Mapped: 216580 kB' 'Shmem: 9155860 kB' 'KReclaimable: 307120 kB' 'Slab: 920512 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613392 kB' 'KernelStack: 21952 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11128680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 
12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.568 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:13.569 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42833976 kB' 
'MemAvailable: 44479236 kB' 'Buffers: 3748 kB' 'Cached: 11290784 kB' 'SwapCached: 20048 kB' 'Active: 7005984 kB' 'Inactive: 4904208 kB' 'Active(anon): 6551404 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598876 kB' 'Mapped: 216556 kB' 'Shmem: 9155880 kB' 'KReclaimable: 307120 kB' 'Slab: 920508 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613388 kB' 'KernelStack: 21920 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11128456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.570 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 
12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.571 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.572 nr_hugepages=1024 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.572 resv_hugepages=0 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.572 surplus_hugepages=0 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.572 anon_hugepages=0 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42834228 kB' 'MemAvailable: 44479488 kB' 'Buffers: 3748 kB' 'Cached: 11290824 kB' 'SwapCached: 20048 kB' 'Active: 7005652 kB' 'Inactive: 4904208 kB' 'Active(anon): 6551072 kB' 'Inactive(anon): 3220136 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598520 kB' 'Mapped: 216556 kB' 'Shmem: 9155920 kB' 'KReclaimable: 307120 kB' 'Slab: 920508 kB' 'SReclaimable: 307120 kB' 'SUnreclaim: 613388 kB' 'KernelStack: 21952 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11128476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 88704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3454324 kB' 'DirectMap2M: 51806208 kB' 'DirectMap1G: 13631488 kB' 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.572 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.572 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.833 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
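The long runs of "-- # continue" above are simply the set -x trace of the get_meminfo helper in setup/common.sh scanning /proc/meminfo one key at a time, once per requested field (HugePages_Surp, HugePages_Rsvd, HugePages_Total, and so on). A minimal sketch of what that helper appears to do, reconstructed only from the commands visible in this trace (the real setup/common.sh may differ in detail; the per-node branch in particular is an assumption):

    # Sketch of the traced helper; not the verbatim SPDK source.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}   # e.g. get_meminfo HugePages_Surp
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Assumed: when a NUMA node is given, read that node's meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value [kB]" lines; print the value of the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    # Usage as seen in the surrounding trace (values from this run):
    #   anon=$(get_meminfo AnonHugePages)    # -> 0
    #   surp=$(get_meminfo HugePages_Surp)   # -> 0
    #   resv=$(get_meminfo HugePages_Rsvd)   # -> 0
    #   (( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107: 1024 == 1024 + 0 + 0

With anon, surp and resv all coming back as 0 and nr_hugepages echoed as 1024, the accounting check in setup/hugepages.sh reduces to 1024 == 1024 + 0 + 0 and passes, after which the script re-reads HugePages_Total in the trace that continues below.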
00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.834 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
32639140 kB' 'MemFree: 21170268 kB' 'MemUsed: 11468872 kB' 'SwapCached: 17412 kB' 'Active: 3691640 kB' 'Inactive: 4016092 kB' 'Active(anon): 3645168 kB' 'Inactive(anon): 3213180 kB' 'Active(file): 46472 kB' 'Inactive(file): 802912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7288340 kB' 'Mapped: 133564 kB' 'AnonPages: 422520 kB' 'Shmem: 6421544 kB' 'KernelStack: 13000 kB' 'PageTables: 5156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198264 kB' 'Slab: 523928 kB' 'SReclaimable: 198264 kB' 'SUnreclaim: 325664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
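The node0 snapshot captured above reports 'HugePages_Total: 1024', 'HugePages_Free: 1024' and 'HugePages_Surp: 0'. As a hedged cross-check (not part of the traced scripts), the same counters are exposed per page size under sysfs; the snippet below assumes 2 MiB pages and may print nothing on systems without that size.

#!/usr/bin/env bash
# Sketch: read the per-node hugepage counters straight from sysfs.
for node in /sys/devices/system/node/node[0-9]*; do
    hp_dir=$node/hugepages/hugepages-2048kB
    [[ -d $hp_dir ]] || continue
    printf '%s: total=%s free=%s surplus=%s\n' \
        "${node##*/}" \
        "$(<"$hp_dir/nr_hugepages")" \
        "$(<"$hp_dir/free_hugepages")" \
        "$(<"$hp_dir/surplus_hugepages")"
done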
00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.835 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
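What the trace is verifying around hugepages.sh@110-@128 is an accounting check: the globally reported HugePages_Total must add up against the requested page count plus surplus and reserved pages, and each NUMA node's share is then echoed as "nodeN=<pages> expecting <pages>", which is where the "node0=1024 expecting 1024" line further below comes from. The following is a simplified, hedged sketch of that check, not the SPDK code.

#!/usr/bin/env bash
# Sketch: global and per-node hugepage accounting check.
expected=1024    # nr_hugepages requested by this test run

total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $NF}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $NF}' /proc/meminfo)

if (( total != expected + surp + rsvd )); then
    echo "global hugepage count mismatch: $total != $expected + $surp + $rsvd" >&2
    exit 1
fi

for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*/node}
    pages=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    echo "node$n=$pages expecting $pages"
done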
00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:13.836 node0=1024 expecting 1024 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:13.836 00:03:13.836 real 0m6.336s 00:03:13.836 user 0m2.208s 00:03:13.836 sys 0m4.148s 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:13.836 12:23:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:13.836 ************************************ 00:03:13.836 END TEST no_shrink_alloc 00:03:13.836 ************************************ 00:03:13.836 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:13.837 12:23:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:13.837 00:03:13.837 real 0m26.359s 00:03:13.837 user 0m9.255s 00:03:13.837 sys 0m15.929s 00:03:13.837 12:23:58 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:13.837 12:23:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.837 ************************************ 00:03:13.837 END TEST hugepages 00:03:13.837 ************************************ 00:03:13.837 12:23:58 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:13.837 12:23:58 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:13.837 12:23:58 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:13.837 12:23:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:13.837 ************************************ 00:03:13.837 START TEST driver 00:03:13.837 ************************************ 00:03:13.837 12:23:58 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:14.096 * Looking for test storage... 
00:03:14.096 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:14.096 12:23:58 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:14.096 12:23:58 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.096 12:23:58 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.365 12:24:03 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:19.365 12:24:03 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:19.365 12:24:03 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:19.365 12:24:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:19.365 ************************************ 00:03:19.365 START TEST guess_driver 00:03:19.365 ************************************ 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:19.365 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:19.365 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:19.365 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:19.365 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:19.365 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:19.365 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:19.365 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:19.365 12:24:03 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:19.365 Looking for driver=vfio-pci 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.365 12:24:03 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.896 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.155 12:24:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:23.561 12:24:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:23.561 12:24:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:23.561 12:24:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:23.846 12:24:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:23.846 12:24:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:23.846 12:24:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.846 12:24:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.118 00:03:29.118 real 0m9.820s 00:03:29.118 user 0m2.502s 00:03:29.118 sys 0m4.962s 00:03:29.118 12:24:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:29.118 12:24:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:29.118 ************************************ 00:03:29.118 END TEST guess_driver 00:03:29.118 ************************************ 00:03:29.118 00:03:29.118 real 0m14.717s 00:03:29.118 user 0m3.874s 00:03:29.118 sys 0m7.717s 00:03:29.118 12:24:13 setup.sh.driver -- common/autotest_common.sh@1123 
-- # xtrace_disable 00:03:29.118 12:24:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:29.118 ************************************ 00:03:29.118 END TEST driver 00:03:29.118 ************************************ 00:03:29.118 12:24:13 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:29.118 12:24:13 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:29.118 12:24:13 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:29.118 12:24:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:29.118 ************************************ 00:03:29.118 START TEST devices 00:03:29.118 ************************************ 00:03:29.118 12:24:13 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:29.118 * Looking for test storage... 00:03:29.118 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:29.118 12:24:13 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:29.118 12:24:13 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:29.118 12:24:13 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.118 12:24:13 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:32.395 12:24:16 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:32.395 12:24:16 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py 
nvme0n1 00:03:32.395 No valid GPT data, bailing 00:03:32.395 12:24:16 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:32.395 12:24:16 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:32.395 12:24:16 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:32.395 12:24:16 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:32.395 12:24:16 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:32.395 12:24:16 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:32.395 12:24:16 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:32.395 12:24:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:32.395 ************************************ 00:03:32.395 START TEST nvme_mount 00:03:32.395 ************************************ 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:32.395 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:32.396 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:32.396 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.396 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.396 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:32.396 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.396 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 
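The devices.sh trace above selects its test disk in three steps: skip zoned namespaces, treat a namespace as free only when it carries no partition table (the "No valid GPT data, bailing" / empty blkid PTTYPE result), and require at least min_disk_size bytes before recording it as test_disk. Below is a simplified, hedged sketch of that selection, not the SPDK code; the real script also excludes multipath "c" names, and blkid on a raw device typically needs root.

#!/usr/bin/env bash
# Sketch: pick the first usable nvme namespace as the test disk.
min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in the trace

test_disk=
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ -b /dev/$dev ]] || continue
    zoned=$(cat "$block/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned == none ]] || continue                      # skip zoned namespaces
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
    [[ -z $pt ]] || continue                              # already partitioned, so in use
    size=$(( $(cat "$block/size") * 512 ))                # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) || continue
    test_disk=$dev
    break
done
echo "test disk: ${test_disk:-none found}"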
00:03:32.396 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:32.396 12:24:16 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:33.769 Creating new GPT entries in memory. 00:03:33.769 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:33.769 other utilities. 00:03:33.769 12:24:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:33.769 12:24:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.769 12:24:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.769 12:24:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.769 12:24:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:34.703 Creating new GPT entries in memory. 00:03:34.703 The operation has completed successfully. 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2361795 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.703 12:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 
12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.231 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.490 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:37.490 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:37.490 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:37.490 12:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:37.490 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:37.490 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:37.748 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:37.748 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:37.748 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:37.748 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:37.748 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:37.748 12:24:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:37.748 12:24:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.748 12:24:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:37.748 12:24:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.007 12:24:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.288 12:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:44.567 12:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.567 12:24:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.567 12:24:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:44.567 12:24:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:44.567 12:24:29 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:44.567 12:24:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.567 12:24:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.567 12:24:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.567 12:24:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.568 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.568 00:03:44.568 real 0m12.095s 00:03:44.568 user 0m3.334s 00:03:44.568 sys 0m6.569s 00:03:44.568 12:24:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:44.568 12:24:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:44.568 
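The nvme_mount trace above reduces to one pattern: setup.sh config prints a status line per PCI function, and devices.sh counts the run as successful only when the single allowed controller (PCI_ALLOWED=0000:d8:00.0) reports an "Active devices:" string containing the expected mount, after which cleanup_nvme unmounts and wipes the namespace. A minimal sketch of that check, assuming a helper name and script path of my own choosing rather than the exact SPDK sources:

# Sketch: scan the per-device status lines and flag the allowed controller as
# "found" when it advertises the expected active mounts (so setup.sh leaves it
# bound to the kernel nvme driver instead of rebinding it).
verify_active() {
    local dev=$1 mounts=$2 found=0 pci status
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(sudo ./scripts/setup.sh config)    # path relative to an SPDK checkout (assumption)
    (( found == 1 ))
}

verify_active 0000:d8:00.0 "nvme0n1:nvme0n1p1" && echo "mount still active, device left alone"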
************************************ 00:03:44.568 END TEST nvme_mount 00:03:44.568 ************************************ 00:03:44.568 12:24:29 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:44.568 12:24:29 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:44.568 12:24:29 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:44.568 12:24:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.568 ************************************ 00:03:44.568 START TEST dm_mount 00:03:44.568 ************************************ 00:03:44.568 12:24:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:03:44.568 12:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:44.568 12:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:44.568 12:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:44.568 12:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:44.826 12:24:29 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:45.760 Creating new GPT entries in memory. 00:03:45.760 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:45.760 other utilities. 00:03:45.760 12:24:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:45.760 12:24:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.760 12:24:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:45.760 12:24:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.760 12:24:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:46.695 Creating new GPT entries in memory. 00:03:46.695 The operation has completed successfully. 00:03:46.695 12:24:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:46.695 12:24:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.695 12:24:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:46.695 12:24:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.695 12:24:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:48.069 The operation has completed successfully. 00:03:48.069 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.069 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.069 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2366218 00:03:48.069 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.070 12:24:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:51.356 
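In the dm_mount leg above, the drive is first split with sgdisk into two small partitions (--new=1:2048:2099199 and --new=2:2099200:4196351), a device-mapper target named nvme_dm_test is stacked on top of them, and the stack is confirmed through the sysfs holders links before formatting and mounting. Roughly equivalent commands follow; the concatenated dm table is my assumption about what the harness feeds dmsetup, since the table itself is not visible in this log:

# Sketch: partition, stack a linear dm device over the two partitions,
# confirm the holders links, then put an ext4 filesystem on the mapper node.
sgdisk /dev/nvme0n1 --zap-all
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351

s1=$(blockdev --getsz /dev/nvme0n1p1)
s2=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<EOF
0 $s1 linear /dev/nvme0n1p1 0
$s1 $s2 linear /dev/nvme0n1p2 0
EOF

dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")       # resolves to dm-0 in this run
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p ./dm_mount && mount /dev/mapper/nvme_dm_test ./dm_mount   # local mount point (assumption)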
12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.356 12:24:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.884 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:54.141 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:54.399 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.399 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:54.399 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.399 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:54.399 12:24:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:54.399 00:03:54.399 real 0m9.605s 00:03:54.399 user 0m2.170s 00:03:54.399 sys 0m4.396s 00:03:54.399 12:24:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:54.399 12:24:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:54.399 ************************************ 00:03:54.399 END TEST dm_mount 00:03:54.399 ************************************ 00:03:54.399 12:24:38 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:03:54.399 12:24:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:54.399 12:24:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.399 12:24:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.399 12:24:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.399 12:24:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.399 12:24:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.657 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:54.657 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:54.657 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.657 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:54.657 12:24:39 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:54.657 12:24:39 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:54.657 12:24:39 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:54.657 12:24:39 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.657 12:24:39 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:54.657 12:24:39 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.657 12:24:39 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:54.657 00:03:54.657 real 0m25.931s 00:03:54.657 user 0m6.912s 00:03:54.657 sys 0m13.656s 00:03:54.657 12:24:39 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:54.657 12:24:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:54.657 ************************************ 00:03:54.657 END TEST devices 00:03:54.657 ************************************ 00:03:54.657 00:03:54.657 real 1m31.311s 00:03:54.657 user 0m27.708s 00:03:54.657 sys 0m52.035s 00:03:54.657 12:24:39 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:54.657 12:24:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:54.657 ************************************ 00:03:54.657 END TEST setup.sh 00:03:54.657 ************************************ 00:03:54.657 12:24:39 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:57.934 Hugepages 00:03:57.934 node hugesize free / total 00:03:57.934 node0 1048576kB 0 / 0 00:03:57.934 node0 2048kB 2048 / 2048 00:03:57.934 node1 1048576kB 0 / 0 00:03:57.934 node1 2048kB 0 / 0 00:03:57.934 00:03:57.934 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.934 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:57.934 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:57.934 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:57.934 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:57.934 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:57.934 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:57.934 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:57.934 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:57.934 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:57.934 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:57.934 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:57.934 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:57.934 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:57.934 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:57.934 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:57.934 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:58.193 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:58.193 12:24:42 -- spdk/autotest.sh@130 -- # uname -s 00:03:58.193 12:24:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:58.193 12:24:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:58.193 12:24:42 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:01.486 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.486 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.878 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.136 12:24:47 -- common/autotest_common.sh@1529 -- # sleep 1 00:04:04.069 12:24:48 -- common/autotest_common.sh@1530 -- # bdfs=() 00:04:04.069 12:24:48 -- common/autotest_common.sh@1530 -- # local bdfs 00:04:04.069 12:24:48 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.069 12:24:48 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:04:04.069 12:24:48 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:04.069 12:24:48 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:04.069 12:24:48 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.069 12:24:48 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:04.069 12:24:48 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:04.069 12:24:48 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:04.069 12:24:48 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:04:04.069 12:24:48 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.349 Waiting for block devices as requested 00:04:07.606 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:07.606 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:07.606 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:07.864 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:07.864 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:07.864 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:07.864 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:08.122 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:08.122 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:08.122 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:08.381 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:08.381 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:08.381 
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:08.640 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:08.640 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:08.640 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:08.897 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:08.897 12:24:53 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:04:08.897 12:24:53 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:08.897 12:24:53 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:04:08.897 12:24:53 -- common/autotest_common.sh@1499 -- # grep 0000:d8:00.0/nvme/nvme 00:04:08.897 12:24:53 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:08.897 12:24:53 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:08.897 12:24:53 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:08.897 12:24:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:04:08.897 12:24:53 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:04:08.897 12:24:53 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:04:08.897 12:24:53 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:04:08.897 12:24:53 -- common/autotest_common.sh@1542 -- # grep oacs 00:04:08.897 12:24:53 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:04:08.897 12:24:53 -- common/autotest_common.sh@1542 -- # oacs=' 0xe' 00:04:08.897 12:24:53 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:04:08.897 12:24:53 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:04:08.897 12:24:53 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:04:08.897 12:24:53 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:04:08.897 12:24:53 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:04:09.154 12:24:53 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:04:09.154 12:24:53 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:04:09.154 12:24:53 -- common/autotest_common.sh@1554 -- # continue 00:04:09.154 12:24:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:09.154 12:24:53 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:09.154 12:24:53 -- common/autotest_common.sh@10 -- # set +x 00:04:09.154 12:24:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:09.154 12:24:53 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:09.154 12:24:53 -- common/autotest_common.sh@10 -- # set +x 00:04:09.154 12:24:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:12.432 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:04:12.432 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.432 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.805 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.062 12:24:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:14.062 12:24:58 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:14.062 12:24:58 -- common/autotest_common.sh@10 -- # set +x 00:04:14.062 12:24:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:14.062 12:24:58 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:04:14.062 12:24:58 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.062 12:24:58 -- common/autotest_common.sh@1574 -- # bdfs=() 00:04:14.062 12:24:58 -- common/autotest_common.sh@1574 -- # local bdfs 00:04:14.062 12:24:58 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:04:14.062 12:24:58 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:14.062 12:24:58 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:14.062 12:24:58 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.062 12:24:58 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.062 12:24:58 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:14.062 12:24:58 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:14.062 12:24:58 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:04:14.062 12:24:58 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:04:14.062 12:24:58 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:14.062 12:24:58 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:04:14.062 12:24:58 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:14.062 12:24:58 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:04:14.062 12:24:58 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:d8:00.0 00:04:14.062 12:24:58 -- common/autotest_common.sh@1589 -- # [[ -z 0000:d8:00.0 ]] 00:04:14.062 12:24:58 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=2375999 00:04:14.062 12:24:58 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.062 12:24:58 -- common/autotest_common.sh@1595 -- # waitforlisten 2375999 00:04:14.062 12:24:58 -- common/autotest_common.sh@828 -- # '[' -z 2375999 ']' 00:04:14.062 12:24:58 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.062 12:24:58 -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:14.062 12:24:58 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.062 12:24:58 -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:14.062 12:24:58 -- common/autotest_common.sh@10 -- # set +x 00:04:14.360 [2024-05-15 12:24:58.684555] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
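Before the Opal revert attempt, the prologue above selects which controllers to touch purely from PCI metadata: gen_nvme.sh emits a JSON config, jq pulls out each transport address, and an address is kept only if /sys/bus/pci/devices/<bdf>/device reads 0x0a54, the device id reported by this node's NVMe drive. A condensed sketch of that selection, with rootdir standing in for the SPDK checkout:

# Sketch: list NVMe BDFs known to SPDK and keep those whose PCI device id
# matches the one this cleanup step targets (0x0a54 in this run).
rootdir=/path/to/spdk            # checkout root (assumption)
get_nvme_bdfs() {
    "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
}

bdfs=()
for bdf in $(get_nvme_bdfs); do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"       # prints 0000:d8:00.0 on this machine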
00:04:14.360 [2024-05-15 12:24:58.684619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375999 ] 00:04:14.360 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.360 [2024-05-15 12:24:58.753466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.360 [2024-05-15 12:24:58.831262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.922 12:24:59 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:14.922 12:24:59 -- common/autotest_common.sh@861 -- # return 0 00:04:14.922 12:24:59 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:04:14.922 12:24:59 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:04:14.922 12:24:59 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:18.199 nvme0n1 00:04:18.199 12:25:02 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:18.199 [2024-05-15 12:25:02.666610] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:18.199 request: 00:04:18.199 { 00:04:18.199 "nvme_ctrlr_name": "nvme0", 00:04:18.199 "password": "test", 00:04:18.199 "method": "bdev_nvme_opal_revert", 00:04:18.199 "req_id": 1 00:04:18.200 } 00:04:18.200 Got JSON-RPC error response 00:04:18.200 response: 00:04:18.200 { 00:04:18.200 "code": -32602, 00:04:18.200 "message": "Invalid parameters" 00:04:18.200 } 00:04:18.200 12:25:02 -- common/autotest_common.sh@1601 -- # true 00:04:18.200 12:25:02 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:04:18.200 12:25:02 -- common/autotest_common.sh@1605 -- # killprocess 2375999 00:04:18.200 12:25:02 -- common/autotest_common.sh@947 -- # '[' -z 2375999 ']' 00:04:18.200 12:25:02 -- common/autotest_common.sh@951 -- # kill -0 2375999 00:04:18.200 12:25:02 -- common/autotest_common.sh@952 -- # uname 00:04:18.200 12:25:02 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:18.200 12:25:02 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2375999 00:04:18.200 12:25:02 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:18.200 12:25:02 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:18.200 12:25:02 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2375999' 00:04:18.200 killing process with pid 2375999 00:04:18.200 12:25:02 -- common/autotest_common.sh@966 -- # kill 2375999 00:04:18.200 12:25:02 -- common/autotest_common.sh@971 -- # wait 2375999 00:04:20.723 12:25:04 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:20.723 12:25:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:20.723 12:25:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:20.723 12:25:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:20.723 12:25:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:20.723 12:25:04 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:20.723 12:25:04 -- common/autotest_common.sh@10 -- # set +x 00:04:20.723 12:25:04 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:20.723 12:25:04 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:20.723 12:25:04 -- common/autotest_common.sh@1104 -- # xtrace_disable 
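The JSON-RPC exchange in the opal revert above is the expected shape of a revert against a drive without Opal support: bdev_nvme_attach_controller succeeds and reports nvme0n1, bdev_nvme_opal_revert comes back with code -32602 "Invalid parameters", and the harness deliberately swallows the failure (the true step) before killing and waiting on the target. Reproducing the same sequence by hand against a local spdk_tgt would look roughly like this, using the socket path and BDF from this run; the readiness loop is a crude stand-in for the harness's waitforlisten helper:

# Sketch: start spdk_tgt, attach the controller over PCIe, attempt an Opal
# revert with the test password, tolerate the failure on non-Opal drives,
# then stop the target.
rpc=./scripts/rpc.py             # relative to an SPDK checkout (assumption)
sock=/var/tmp/spdk.sock

./build/bin/spdk_tgt &
spdk_tgt_pid=$!
until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 1; done

$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
$rpc -s $sock bdev_nvme_opal_revert -b nvme0 -p test || true

kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"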
00:04:20.723 12:25:04 -- common/autotest_common.sh@10 -- # set +x 00:04:20.723 ************************************ 00:04:20.723 START TEST env 00:04:20.723 ************************************ 00:04:20.723 12:25:04 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:20.723 * Looking for test storage... 00:04:20.723 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:20.723 12:25:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.723 12:25:05 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:20.723 12:25:05 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:20.723 12:25:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.723 ************************************ 00:04:20.723 START TEST env_memory 00:04:20.723 ************************************ 00:04:20.723 12:25:05 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.723 00:04:20.723 00:04:20.723 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.723 http://cunit.sourceforge.net/ 00:04:20.723 00:04:20.723 00:04:20.723 Suite: memory 00:04:20.723 Test: alloc and free memory map ...[2024-05-15 12:25:05.144871] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.723 passed 00:04:20.723 Test: mem map translation ...[2024-05-15 12:25:05.157872] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.723 [2024-05-15 12:25:05.157889] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.723 [2024-05-15 12:25:05.157920] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.723 [2024-05-15 12:25:05.157930] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.723 passed 00:04:20.723 Test: mem map registration ...[2024-05-15 12:25:05.179598] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:20.723 [2024-05-15 12:25:05.179613] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:20.723 passed 00:04:20.723 Test: mem map adjacent registrations ...passed 00:04:20.723 00:04:20.723 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.723 suites 1 1 n/a 0 0 00:04:20.723 tests 4 4 4 0 0 00:04:20.723 asserts 152 152 152 0 n/a 00:04:20.723 00:04:20.723 Elapsed time = 0.086 seconds 00:04:20.723 00:04:20.723 real 0m0.099s 00:04:20.723 user 0m0.092s 00:04:20.723 sys 0m0.007s 00:04:20.723 12:25:05 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:20.723 12:25:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.723 ************************************ 
00:04:20.723 END TEST env_memory 00:04:20.723 ************************************ 00:04:20.723 12:25:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.723 12:25:05 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:20.723 12:25:05 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:20.723 12:25:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.723 ************************************ 00:04:20.723 START TEST env_vtophys 00:04:20.723 ************************************ 00:04:20.723 12:25:05 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.723 EAL: lib.eal log level changed from notice to debug 00:04:20.723 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.723 EAL: Detected lcore 1 as core 1 on socket 0 00:04:20.723 EAL: Detected lcore 2 as core 2 on socket 0 00:04:20.723 EAL: Detected lcore 3 as core 3 on socket 0 00:04:20.723 EAL: Detected lcore 4 as core 4 on socket 0 00:04:20.723 EAL: Detected lcore 5 as core 5 on socket 0 00:04:20.723 EAL: Detected lcore 6 as core 6 on socket 0 00:04:20.723 EAL: Detected lcore 7 as core 8 on socket 0 00:04:20.723 EAL: Detected lcore 8 as core 9 on socket 0 00:04:20.723 EAL: Detected lcore 9 as core 10 on socket 0 00:04:20.723 EAL: Detected lcore 10 as core 11 on socket 0 00:04:20.723 EAL: Detected lcore 11 as core 12 on socket 0 00:04:20.723 EAL: Detected lcore 12 as core 13 on socket 0 00:04:20.723 EAL: Detected lcore 13 as core 14 on socket 0 00:04:20.723 EAL: Detected lcore 14 as core 16 on socket 0 00:04:20.723 EAL: Detected lcore 15 as core 17 on socket 0 00:04:20.723 EAL: Detected lcore 16 as core 18 on socket 0 00:04:20.723 EAL: Detected lcore 17 as core 19 on socket 0 00:04:20.723 EAL: Detected lcore 18 as core 20 on socket 0 00:04:20.723 EAL: Detected lcore 19 as core 21 on socket 0 00:04:20.723 EAL: Detected lcore 20 as core 22 on socket 0 00:04:20.723 EAL: Detected lcore 21 as core 24 on socket 0 00:04:20.723 EAL: Detected lcore 22 as core 25 on socket 0 00:04:20.723 EAL: Detected lcore 23 as core 26 on socket 0 00:04:20.723 EAL: Detected lcore 24 as core 27 on socket 0 00:04:20.723 EAL: Detected lcore 25 as core 28 on socket 0 00:04:20.723 EAL: Detected lcore 26 as core 29 on socket 0 00:04:20.723 EAL: Detected lcore 27 as core 30 on socket 0 00:04:20.723 EAL: Detected lcore 28 as core 0 on socket 1 00:04:20.723 EAL: Detected lcore 29 as core 1 on socket 1 00:04:20.723 EAL: Detected lcore 30 as core 2 on socket 1 00:04:20.723 EAL: Detected lcore 31 as core 3 on socket 1 00:04:20.723 EAL: Detected lcore 32 as core 4 on socket 1 00:04:20.723 EAL: Detected lcore 33 as core 5 on socket 1 00:04:20.723 EAL: Detected lcore 34 as core 6 on socket 1 00:04:20.723 EAL: Detected lcore 35 as core 8 on socket 1 00:04:20.723 EAL: Detected lcore 36 as core 9 on socket 1 00:04:20.723 EAL: Detected lcore 37 as core 10 on socket 1 00:04:20.723 EAL: Detected lcore 38 as core 11 on socket 1 00:04:20.723 EAL: Detected lcore 39 as core 12 on socket 1 00:04:20.723 EAL: Detected lcore 40 as core 13 on socket 1 00:04:20.723 EAL: Detected lcore 41 as core 14 on socket 1 00:04:20.723 EAL: Detected lcore 42 as core 16 on socket 1 00:04:20.723 EAL: Detected lcore 43 as core 17 on socket 1 00:04:20.723 EAL: Detected lcore 44 as core 18 on socket 1 00:04:20.723 EAL: Detected lcore 45 as core 19 on socket 1 00:04:20.723 EAL: Detected lcore 46 as core 20 on 
socket 1 00:04:20.723 EAL: Detected lcore 47 as core 21 on socket 1 00:04:20.723 EAL: Detected lcore 48 as core 22 on socket 1 00:04:20.723 EAL: Detected lcore 49 as core 24 on socket 1 00:04:20.723 EAL: Detected lcore 50 as core 25 on socket 1 00:04:20.723 EAL: Detected lcore 51 as core 26 on socket 1 00:04:20.723 EAL: Detected lcore 52 as core 27 on socket 1 00:04:20.723 EAL: Detected lcore 53 as core 28 on socket 1 00:04:20.723 EAL: Detected lcore 54 as core 29 on socket 1 00:04:20.723 EAL: Detected lcore 55 as core 30 on socket 1 00:04:20.723 EAL: Detected lcore 56 as core 0 on socket 0 00:04:20.723 EAL: Detected lcore 57 as core 1 on socket 0 00:04:20.723 EAL: Detected lcore 58 as core 2 on socket 0 00:04:20.723 EAL: Detected lcore 59 as core 3 on socket 0 00:04:20.723 EAL: Detected lcore 60 as core 4 on socket 0 00:04:20.723 EAL: Detected lcore 61 as core 5 on socket 0 00:04:20.723 EAL: Detected lcore 62 as core 6 on socket 0 00:04:20.723 EAL: Detected lcore 63 as core 8 on socket 0 00:04:20.723 EAL: Detected lcore 64 as core 9 on socket 0 00:04:20.723 EAL: Detected lcore 65 as core 10 on socket 0 00:04:20.723 EAL: Detected lcore 66 as core 11 on socket 0 00:04:20.723 EAL: Detected lcore 67 as core 12 on socket 0 00:04:20.723 EAL: Detected lcore 68 as core 13 on socket 0 00:04:20.723 EAL: Detected lcore 69 as core 14 on socket 0 00:04:20.723 EAL: Detected lcore 70 as core 16 on socket 0 00:04:20.723 EAL: Detected lcore 71 as core 17 on socket 0 00:04:20.723 EAL: Detected lcore 72 as core 18 on socket 0 00:04:20.724 EAL: Detected lcore 73 as core 19 on socket 0 00:04:20.724 EAL: Detected lcore 74 as core 20 on socket 0 00:04:20.724 EAL: Detected lcore 75 as core 21 on socket 0 00:04:20.724 EAL: Detected lcore 76 as core 22 on socket 0 00:04:20.724 EAL: Detected lcore 77 as core 24 on socket 0 00:04:20.724 EAL: Detected lcore 78 as core 25 on socket 0 00:04:20.724 EAL: Detected lcore 79 as core 26 on socket 0 00:04:20.724 EAL: Detected lcore 80 as core 27 on socket 0 00:04:20.724 EAL: Detected lcore 81 as core 28 on socket 0 00:04:20.724 EAL: Detected lcore 82 as core 29 on socket 0 00:04:20.724 EAL: Detected lcore 83 as core 30 on socket 0 00:04:20.724 EAL: Detected lcore 84 as core 0 on socket 1 00:04:20.724 EAL: Detected lcore 85 as core 1 on socket 1 00:04:20.724 EAL: Detected lcore 86 as core 2 on socket 1 00:04:20.724 EAL: Detected lcore 87 as core 3 on socket 1 00:04:20.724 EAL: Detected lcore 88 as core 4 on socket 1 00:04:20.724 EAL: Detected lcore 89 as core 5 on socket 1 00:04:20.724 EAL: Detected lcore 90 as core 6 on socket 1 00:04:20.724 EAL: Detected lcore 91 as core 8 on socket 1 00:04:20.724 EAL: Detected lcore 92 as core 9 on socket 1 00:04:20.724 EAL: Detected lcore 93 as core 10 on socket 1 00:04:20.724 EAL: Detected lcore 94 as core 11 on socket 1 00:04:20.724 EAL: Detected lcore 95 as core 12 on socket 1 00:04:20.724 EAL: Detected lcore 96 as core 13 on socket 1 00:04:20.724 EAL: Detected lcore 97 as core 14 on socket 1 00:04:20.724 EAL: Detected lcore 98 as core 16 on socket 1 00:04:20.724 EAL: Detected lcore 99 as core 17 on socket 1 00:04:20.724 EAL: Detected lcore 100 as core 18 on socket 1 00:04:20.724 EAL: Detected lcore 101 as core 19 on socket 1 00:04:20.724 EAL: Detected lcore 102 as core 20 on socket 1 00:04:20.724 EAL: Detected lcore 103 as core 21 on socket 1 00:04:20.724 EAL: Detected lcore 104 as core 22 on socket 1 00:04:20.724 EAL: Detected lcore 105 as core 24 on socket 1 00:04:20.724 EAL: Detected lcore 106 as core 25 on socket 1 00:04:20.724 
EAL: Detected lcore 107 as core 26 on socket 1 00:04:20.724 EAL: Detected lcore 108 as core 27 on socket 1 00:04:20.724 EAL: Detected lcore 109 as core 28 on socket 1 00:04:20.724 EAL: Detected lcore 110 as core 29 on socket 1 00:04:20.724 EAL: Detected lcore 111 as core 30 on socket 1 00:04:20.724 EAL: Maximum logical cores by configuration: 128 00:04:20.724 EAL: Detected CPU lcores: 112 00:04:20.724 EAL: Detected NUMA nodes: 2 00:04:20.724 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:20.724 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:20.724 EAL: Checking presence of .so 'librte_eal.so' 00:04:20.724 EAL: Detected static linkage of DPDK 00:04:20.724 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.982 EAL: Bus pci wants IOVA as 'DC' 00:04:20.982 EAL: Buses did not request a specific IOVA mode. 00:04:20.982 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:20.982 EAL: Selected IOVA mode 'VA' 00:04:20.982 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.982 EAL: Probing VFIO support... 00:04:20.982 EAL: IOMMU type 1 (Type 1) is supported 00:04:20.982 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:20.982 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:20.982 EAL: VFIO support initialized 00:04:20.982 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.982 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.982 EAL: Setting up physically contiguous memory... 00:04:20.982 EAL: Setting maximum number of open files to 524288 00:04:20.982 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.983 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:20.983 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.983 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.983 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.983 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.983 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.983 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.983 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.983 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.983 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.983 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.983 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.983 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.983 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.983 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.983 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.983 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.983 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.983 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.983 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.983 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.983 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.983 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.983 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.983 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.983 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.983 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:20.983 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.983 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:04:20.983 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.983 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.983 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:20.983 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:20.983 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.983 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:20.983 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.983 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.983 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:20.983 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:20.983 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.983 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:20.983 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.983 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.983 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:20.983 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:20.983 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.983 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:20.983 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.983 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.983 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:20.983 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:20.983 EAL: Hugepages will be freed exactly as allocated. 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: TSC frequency is ~2500000 KHz 00:04:20.983 EAL: Main lcore 0 is ready (tid=7f01e4143a00;cpuset=[0]) 00:04:20.983 EAL: Trying to obtain current memory policy. 00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 0 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.983 00:04:20.983 00:04:20.983 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.983 http://cunit.sourceforge.net/ 00:04:20.983 00:04:20.983 00:04:20.983 Suite: components_suite 00:04:20.983 Test: vtophys_malloc_test ...passed 00:04:20.983 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 4 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.983 EAL: Trying to obtain current memory policy. 
00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 4 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.983 EAL: Trying to obtain current memory policy. 00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 4 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 10MB 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was shrunk by 10MB 00:04:20.983 EAL: Trying to obtain current memory policy. 00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 4 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 18MB 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was shrunk by 18MB 00:04:20.983 EAL: Trying to obtain current memory policy. 00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 4 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 34MB 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was shrunk by 34MB 00:04:20.983 EAL: Trying to obtain current memory policy. 00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 4 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 66MB 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was shrunk by 66MB 00:04:20.983 EAL: Trying to obtain current memory policy. 
00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 4 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.983 EAL: Trying to obtain current memory policy. 00:04:20.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.983 EAL: Restoring previous memory policy: 4 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.983 EAL: request: mp_malloc_sync 00:04:20.983 EAL: No shared files mode enabled, IPC is disabled 00:04:20.983 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.983 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.242 EAL: request: mp_malloc_sync 00:04:21.242 EAL: No shared files mode enabled, IPC is disabled 00:04:21.242 EAL: Heap on socket 0 was shrunk by 258MB 00:04:21.242 EAL: Trying to obtain current memory policy. 00:04:21.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.242 EAL: Restoring previous memory policy: 4 00:04:21.242 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.242 EAL: request: mp_malloc_sync 00:04:21.242 EAL: No shared files mode enabled, IPC is disabled 00:04:21.242 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.242 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.500 EAL: request: mp_malloc_sync 00:04:21.500 EAL: No shared files mode enabled, IPC is disabled 00:04:21.500 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.500 EAL: Trying to obtain current memory policy. 
00:04:21.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.500 EAL: Restoring previous memory policy: 4 00:04:21.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.500 EAL: request: mp_malloc_sync 00:04:21.500 EAL: No shared files mode enabled, IPC is disabled 00:04:21.500 EAL: Heap on socket 0 was expanded by 1026MB 00:04:21.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.758 EAL: request: mp_malloc_sync 00:04:21.758 EAL: No shared files mode enabled, IPC is disabled 00:04:21.758 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.758 passed 00:04:21.758 00:04:21.758 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.758 suites 1 1 n/a 0 0 00:04:21.758 tests 2 2 2 0 0 00:04:21.758 asserts 497 497 497 0 n/a 00:04:21.758 00:04:21.758 Elapsed time = 0.962 seconds 00:04:21.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.025 EAL: request: mp_malloc_sync 00:04:22.025 EAL: No shared files mode enabled, IPC is disabled 00:04:22.025 EAL: Heap on socket 0 was shrunk by 2MB 00:04:22.025 EAL: No shared files mode enabled, IPC is disabled 00:04:22.025 EAL: No shared files mode enabled, IPC is disabled 00:04:22.025 EAL: No shared files mode enabled, IPC is disabled 00:04:22.025 00:04:22.025 real 0m1.083s 00:04:22.025 user 0m0.625s 00:04:22.025 sys 0m0.429s 00:04:22.025 12:25:06 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:22.025 12:25:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:22.025 ************************************ 00:04:22.025 END TEST env_vtophys 00:04:22.025 ************************************ 00:04:22.025 12:25:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:22.025 12:25:06 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:22.025 12:25:06 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:22.025 12:25:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.025 ************************************ 00:04:22.025 START TEST env_pci 00:04:22.025 ************************************ 00:04:22.025 12:25:06 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:22.025 00:04:22.025 00:04:22.025 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.025 http://cunit.sourceforge.net/ 00:04:22.025 00:04:22.025 00:04:22.025 Suite: pci 00:04:22.025 Test: pci_hook ...[2024-05-15 12:25:06.483013] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2377439 has claimed it 00:04:22.025 EAL: Cannot find device (10000:00:01.0) 00:04:22.025 EAL: Failed to attach device on primary process 00:04:22.025 passed 00:04:22.025 00:04:22.025 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.025 suites 1 1 n/a 0 0 00:04:22.025 tests 1 1 1 0 0 00:04:22.025 asserts 25 25 25 0 n/a 00:04:22.025 00:04:22.025 Elapsed time = 0.034 seconds 00:04:22.025 00:04:22.025 real 0m0.052s 00:04:22.025 user 0m0.013s 00:04:22.025 sys 0m0.039s 00:04:22.025 12:25:06 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:22.025 12:25:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:22.025 ************************************ 00:04:22.025 END TEST env_pci 00:04:22.025 ************************************ 00:04:22.025 12:25:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:22.025 
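The repeating "Heap on socket 0 was expanded by ... / shrunk by ..." lines above come from the DPDK heap growing and shrinking as vtophys_spdk_malloc_test allocates progressively larger DMA-safe buffers and frees them. A rough sketch of that allocation pattern, assuming the public spdk_dma_zmalloc/spdk_vtophys/spdk_dma_free API; the sizes, alignment, and iteration bounds are illustrative:

/* Sketch: doubling DMA allocations, each backed by 2 MB hugepages with a
 * valid virtual-to-physical translation. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
    struct spdk_env_opts opts;
    size_t size;

    spdk_env_opts_init(&opts);
    opts.name = "vtophys_sketch";
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    for (size = 4 * 1024 * 1024; size <= 1024 * 1024 * 1024; size *= 2) {
        void *buf = spdk_dma_zmalloc(size, 0x200000, NULL);
        uint64_t len = size;

        if (buf == NULL) {
            break;                       /* out of hugepage memory */
        }
        printf("%zu bytes, vtophys 0x%" PRIx64 "\n",
               size, spdk_vtophys(buf, &len));
        spdk_dma_free(buf);              /* lets the heap shrink again */
    }
    return 0;
}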
12:25:06 env -- env/env.sh@15 -- # uname 00:04:22.025 12:25:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:22.025 12:25:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:22.025 12:25:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.025 12:25:06 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:04:22.025 12:25:06 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:22.025 12:25:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.025 ************************************ 00:04:22.025 START TEST env_dpdk_post_init 00:04:22.025 ************************************ 00:04:22.025 12:25:06 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.025 EAL: Detected CPU lcores: 112 00:04:22.025 EAL: Detected NUMA nodes: 2 00:04:22.283 EAL: Detected static linkage of DPDK 00:04:22.283 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.283 EAL: Selected IOVA mode 'VA' 00:04:22.283 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.283 EAL: VFIO support initialized 00:04:22.283 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.283 EAL: Using IOMMU type 1 (Type 1) 00:04:23.217 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:26.495 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:26.495 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:26.753 Starting DPDK initialization... 00:04:26.753 Starting SPDK post initialization... 00:04:26.753 SPDK NVMe probe 00:04:26.753 Attaching to 0000:d8:00.0 00:04:26.753 Attached to 0000:d8:00.0 00:04:26.753 Cleaning up... 
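The env_dpdk_post_init run above passes -c 0x1 and --base-virtaddr=0x200000000000 into SPDK env initialization and then probes the NVMe controller at 0000:d8:00.0. A hedged sketch of that flow using the public spdk/env.h and spdk/nvme.h APIs; the app name and the callback bodies are illustrative stubs, not the test's own code:

/* Sketch: env init with an explicit core mask and base virtual address,
 * followed by an NVMe PCI probe/attach pass. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;                         /* claim every controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "post_init_sketch";
    opts.core_mask = "0x1";                      /* -c 0x1 */
    opts.base_virtaddr = 0x200000000000ULL;      /* --base-virtaddr */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
        return 1;
    }
    return 0;
}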
00:04:26.753 00:04:26.753 real 0m4.667s 00:04:26.753 user 0m3.494s 00:04:26.753 sys 0m0.419s 00:04:26.753 12:25:11 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:26.753 12:25:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.753 ************************************ 00:04:26.753 END TEST env_dpdk_post_init 00:04:26.753 ************************************ 00:04:26.753 12:25:11 env -- env/env.sh@26 -- # uname 00:04:26.753 12:25:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:26.753 12:25:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.753 12:25:11 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:26.753 12:25:11 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:26.753 12:25:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.012 ************************************ 00:04:27.012 START TEST env_mem_callbacks 00:04:27.012 ************************************ 00:04:27.012 12:25:11 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.012 EAL: Detected CPU lcores: 112 00:04:27.012 EAL: Detected NUMA nodes: 2 00:04:27.012 EAL: Detected static linkage of DPDK 00:04:27.012 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.012 EAL: Selected IOVA mode 'VA' 00:04:27.012 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.012 EAL: VFIO support initialized 00:04:27.012 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.012 00:04:27.012 00:04:27.012 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.012 http://cunit.sourceforge.net/ 00:04:27.012 00:04:27.012 00:04:27.012 Suite: memory 00:04:27.012 Test: test ... 
00:04:27.012 register 0x200000200000 2097152 00:04:27.012 malloc 3145728 00:04:27.012 register 0x200000400000 4194304 00:04:27.012 buf 0x200000500000 len 3145728 PASSED 00:04:27.012 malloc 64 00:04:27.012 buf 0x2000004fff40 len 64 PASSED 00:04:27.012 malloc 4194304 00:04:27.012 register 0x200000800000 6291456 00:04:27.012 buf 0x200000a00000 len 4194304 PASSED 00:04:27.012 free 0x200000500000 3145728 00:04:27.012 free 0x2000004fff40 64 00:04:27.012 unregister 0x200000400000 4194304 PASSED 00:04:27.012 free 0x200000a00000 4194304 00:04:27.012 unregister 0x200000800000 6291456 PASSED 00:04:27.012 malloc 8388608 00:04:27.012 register 0x200000400000 10485760 00:04:27.012 buf 0x200000600000 len 8388608 PASSED 00:04:27.012 free 0x200000600000 8388608 00:04:27.012 unregister 0x200000400000 10485760 PASSED 00:04:27.012 passed 00:04:27.012 00:04:27.012 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.012 suites 1 1 n/a 0 0 00:04:27.012 tests 1 1 1 0 0 00:04:27.012 asserts 15 15 15 0 n/a 00:04:27.012 00:04:27.012 Elapsed time = 0.006 seconds 00:04:27.012 00:04:27.012 real 0m0.067s 00:04:27.012 user 0m0.019s 00:04:27.012 sys 0m0.048s 00:04:27.012 12:25:11 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:27.012 12:25:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:27.012 ************************************ 00:04:27.012 END TEST env_mem_callbacks 00:04:27.012 ************************************ 00:04:27.012 00:04:27.012 real 0m6.524s 00:04:27.012 user 0m4.423s 00:04:27.012 sys 0m1.336s 00:04:27.012 12:25:11 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:27.012 12:25:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.012 ************************************ 00:04:27.012 END TEST env 00:04:27.012 ************************************ 00:04:27.012 12:25:11 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:27.012 12:25:11 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:27.012 12:25:11 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:27.012 12:25:11 -- common/autotest_common.sh@10 -- # set +x 00:04:27.012 ************************************ 00:04:27.012 START TEST rpc 00:04:27.012 ************************************ 00:04:27.012 12:25:11 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:27.270 * Looking for test storage... 00:04:27.270 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:27.270 12:25:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2378451 00:04:27.270 12:25:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.270 12:25:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:27.270 12:25:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2378451 00:04:27.270 12:25:11 rpc -- common/autotest_common.sh@828 -- # '[' -z 2378451 ']' 00:04:27.270 12:25:11 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.270 12:25:11 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:27.270 12:25:11 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
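The "register ... / unregister ..." lines in the env_mem_callbacks output earlier come from spdk_mem_register()/spdk_mem_unregister() notifying every allocated mem map about new and removed regions. A small sketch of that mechanism, assuming the public spdk/env.h mem map API; the 2 MB buffer and the callback's printf are illustrative:

/* Sketch: a mem map whose notify callback reports register/unregister events. */
#include <stdio.h>
#include <stdlib.h>
#include "spdk/env.h"

#define SKETCH_2MB (2UL * 1024 * 1024)

static int
notify(void *cb_ctx, struct spdk_mem_map *map,
       enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
    printf("%s %p %zu\n",
           action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
           vaddr, size);
    return 0;
}

static const struct spdk_mem_map_ops ops = { .notify_cb = notify };

int
main(void)
{
    struct spdk_env_opts opts;
    struct spdk_mem_map *map;
    void *buf = NULL;

    spdk_env_opts_init(&opts);
    opts.name = "mem_cb_sketch";
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    map = spdk_mem_map_alloc(0, &ops, NULL);        /* sees existing regions too */
    if (map == NULL || posix_memalign(&buf, SKETCH_2MB, SKETCH_2MB) != 0) {
        return 1;
    }
    spdk_mem_register(buf, SKETCH_2MB);             /* -> notify REGISTER   */
    spdk_mem_unregister(buf, SKETCH_2MB);           /* -> notify UNREGISTER */
    free(buf);
    spdk_mem_map_free(&map);
    return 0;
}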
00:04:27.270 12:25:11 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:27.270 12:25:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.270 [2024-05-15 12:25:11.702466] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:04:27.270 [2024-05-15 12:25:11.702533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378451 ] 00:04:27.270 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.270 [2024-05-15 12:25:11.773962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.270 [2024-05-15 12:25:11.846902] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:27.270 [2024-05-15 12:25:11.846942] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2378451' to capture a snapshot of events at runtime. 00:04:27.270 [2024-05-15 12:25:11.846952] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:27.270 [2024-05-15 12:25:11.846960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:27.270 [2024-05-15 12:25:11.846982] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2378451 for offline analysis/debug. 00:04:27.270 [2024-05-15 12:25:11.847013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.204 12:25:12 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:28.204 12:25:12 rpc -- common/autotest_common.sh@861 -- # return 0 00:04:28.204 12:25:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:28.204 12:25:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:28.204 12:25:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:28.204 12:25:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:28.204 12:25:12 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:28.204 12:25:12 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:28.204 12:25:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.204 ************************************ 00:04:28.204 START TEST rpc_integrity 00:04:28.204 ************************************ 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:28.204 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.204 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.204 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.204 12:25:12 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.204 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.204 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:28.204 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.204 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.204 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.204 { 00:04:28.204 "name": "Malloc0", 00:04:28.204 "aliases": [ 00:04:28.204 "68af33dd-2c79-4fa2-baa7-78d5106f5fd5" 00:04:28.204 ], 00:04:28.204 "product_name": "Malloc disk", 00:04:28.204 "block_size": 512, 00:04:28.204 "num_blocks": 16384, 00:04:28.204 "uuid": "68af33dd-2c79-4fa2-baa7-78d5106f5fd5", 00:04:28.204 "assigned_rate_limits": { 00:04:28.204 "rw_ios_per_sec": 0, 00:04:28.204 "rw_mbytes_per_sec": 0, 00:04:28.204 "r_mbytes_per_sec": 0, 00:04:28.204 "w_mbytes_per_sec": 0 00:04:28.204 }, 00:04:28.204 "claimed": false, 00:04:28.204 "zoned": false, 00:04:28.204 "supported_io_types": { 00:04:28.204 "read": true, 00:04:28.204 "write": true, 00:04:28.204 "unmap": true, 00:04:28.204 "write_zeroes": true, 00:04:28.204 "flush": true, 00:04:28.204 "reset": true, 00:04:28.204 "compare": false, 00:04:28.204 "compare_and_write": false, 00:04:28.204 "abort": true, 00:04:28.204 "nvme_admin": false, 00:04:28.204 "nvme_io": false 00:04:28.204 }, 00:04:28.204 "memory_domains": [ 00:04:28.204 { 00:04:28.204 "dma_device_id": "system", 00:04:28.204 "dma_device_type": 1 00:04:28.204 }, 00:04:28.204 { 00:04:28.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.204 "dma_device_type": 2 00:04:28.204 } 00:04:28.204 ], 00:04:28.204 "driver_specific": {} 00:04:28.204 } 00:04:28.204 ]' 00:04:28.204 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.205 [2024-05-15 12:25:12.713649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:28.205 [2024-05-15 12:25:12.713683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.205 [2024-05-15 12:25:12.713726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5975060 00:04:28.205 [2024-05-15 12:25:12.713735] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.205 [2024-05-15 12:25:12.714545] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.205 [2024-05-15 12:25:12.714569] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.205 Passthru0 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.205 { 00:04:28.205 "name": "Malloc0", 00:04:28.205 "aliases": [ 00:04:28.205 "68af33dd-2c79-4fa2-baa7-78d5106f5fd5" 00:04:28.205 ], 00:04:28.205 "product_name": "Malloc disk", 00:04:28.205 "block_size": 512, 00:04:28.205 "num_blocks": 16384, 00:04:28.205 "uuid": "68af33dd-2c79-4fa2-baa7-78d5106f5fd5", 00:04:28.205 "assigned_rate_limits": { 00:04:28.205 "rw_ios_per_sec": 0, 00:04:28.205 "rw_mbytes_per_sec": 0, 00:04:28.205 "r_mbytes_per_sec": 0, 00:04:28.205 "w_mbytes_per_sec": 0 00:04:28.205 }, 00:04:28.205 "claimed": true, 00:04:28.205 "claim_type": "exclusive_write", 00:04:28.205 "zoned": false, 00:04:28.205 "supported_io_types": { 00:04:28.205 "read": true, 00:04:28.205 "write": true, 00:04:28.205 "unmap": true, 00:04:28.205 "write_zeroes": true, 00:04:28.205 "flush": true, 00:04:28.205 "reset": true, 00:04:28.205 "compare": false, 00:04:28.205 "compare_and_write": false, 00:04:28.205 "abort": true, 00:04:28.205 "nvme_admin": false, 00:04:28.205 "nvme_io": false 00:04:28.205 }, 00:04:28.205 "memory_domains": [ 00:04:28.205 { 00:04:28.205 "dma_device_id": "system", 00:04:28.205 "dma_device_type": 1 00:04:28.205 }, 00:04:28.205 { 00:04:28.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.205 "dma_device_type": 2 00:04:28.205 } 00:04:28.205 ], 00:04:28.205 "driver_specific": {} 00:04:28.205 }, 00:04:28.205 { 00:04:28.205 "name": "Passthru0", 00:04:28.205 "aliases": [ 00:04:28.205 "bc267b0d-4bd6-53f9-8cd3-42011451d331" 00:04:28.205 ], 00:04:28.205 "product_name": "passthru", 00:04:28.205 "block_size": 512, 00:04:28.205 "num_blocks": 16384, 00:04:28.205 "uuid": "bc267b0d-4bd6-53f9-8cd3-42011451d331", 00:04:28.205 "assigned_rate_limits": { 00:04:28.205 "rw_ios_per_sec": 0, 00:04:28.205 "rw_mbytes_per_sec": 0, 00:04:28.205 "r_mbytes_per_sec": 0, 00:04:28.205 "w_mbytes_per_sec": 0 00:04:28.205 }, 00:04:28.205 "claimed": false, 00:04:28.205 "zoned": false, 00:04:28.205 "supported_io_types": { 00:04:28.205 "read": true, 00:04:28.205 "write": true, 00:04:28.205 "unmap": true, 00:04:28.205 "write_zeroes": true, 00:04:28.205 "flush": true, 00:04:28.205 "reset": true, 00:04:28.205 "compare": false, 00:04:28.205 "compare_and_write": false, 00:04:28.205 "abort": true, 00:04:28.205 "nvme_admin": false, 00:04:28.205 "nvme_io": false 00:04:28.205 }, 00:04:28.205 "memory_domains": [ 00:04:28.205 { 00:04:28.205 "dma_device_id": "system", 00:04:28.205 "dma_device_type": 1 00:04:28.205 }, 00:04:28.205 { 00:04:28.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.205 "dma_device_type": 2 00:04:28.205 } 00:04:28.205 ], 00:04:28.205 "driver_specific": { 00:04:28.205 "passthru": { 00:04:28.205 "name": "Passthru0", 00:04:28.205 "base_bdev_name": "Malloc0" 00:04:28.205 } 00:04:28.205 } 00:04:28.205 } 00:04:28.205 ]' 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.205 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.205 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.463 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.463 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.463 12:25:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.463 00:04:28.463 real 0m0.293s 00:04:28.463 user 0m0.169s 00:04:28.463 sys 0m0.062s 00:04:28.463 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:28.463 12:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 ************************************ 00:04:28.463 END TEST rpc_integrity 00:04:28.463 ************************************ 00:04:28.463 12:25:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:28.463 12:25:12 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:28.463 12:25:12 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:28.463 12:25:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 ************************************ 00:04:28.463 START TEST rpc_plugins 00:04:28.463 ************************************ 00:04:28.463 12:25:12 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:04:28.463 12:25:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:28.463 12:25:12 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.463 12:25:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 12:25:12 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.463 12:25:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:28.463 12:25:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:28.463 12:25:12 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.463 12:25:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 12:25:12 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.463 12:25:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:28.463 { 00:04:28.463 "name": "Malloc1", 00:04:28.463 "aliases": [ 00:04:28.463 "1d49ab48-a6d9-4bcb-bd54-b4af7013efc4" 00:04:28.463 ], 00:04:28.463 "product_name": "Malloc disk", 00:04:28.463 "block_size": 4096, 00:04:28.463 "num_blocks": 256, 00:04:28.463 "uuid": "1d49ab48-a6d9-4bcb-bd54-b4af7013efc4", 00:04:28.463 "assigned_rate_limits": { 00:04:28.463 "rw_ios_per_sec": 0, 00:04:28.463 "rw_mbytes_per_sec": 0, 00:04:28.463 "r_mbytes_per_sec": 0, 00:04:28.463 "w_mbytes_per_sec": 0 00:04:28.463 }, 00:04:28.463 "claimed": false, 00:04:28.463 "zoned": false, 00:04:28.463 "supported_io_types": { 00:04:28.463 "read": true, 00:04:28.463 "write": true, 00:04:28.463 "unmap": true, 00:04:28.463 "write_zeroes": true, 
00:04:28.463 "flush": true, 00:04:28.463 "reset": true, 00:04:28.463 "compare": false, 00:04:28.463 "compare_and_write": false, 00:04:28.463 "abort": true, 00:04:28.463 "nvme_admin": false, 00:04:28.463 "nvme_io": false 00:04:28.463 }, 00:04:28.463 "memory_domains": [ 00:04:28.463 { 00:04:28.463 "dma_device_id": "system", 00:04:28.463 "dma_device_type": 1 00:04:28.463 }, 00:04:28.463 { 00:04:28.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.463 "dma_device_type": 2 00:04:28.463 } 00:04:28.463 ], 00:04:28.463 "driver_specific": {} 00:04:28.463 } 00:04:28.463 ]' 00:04:28.463 12:25:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:28.463 12:25:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:28.463 12:25:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:28.463 12:25:13 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.463 12:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 12:25:13 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.463 12:25:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:28.463 12:25:13 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.463 12:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 12:25:13 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.463 12:25:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:28.463 12:25:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:28.719 12:25:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:28.719 00:04:28.719 real 0m0.142s 00:04:28.719 user 0m0.079s 00:04:28.719 sys 0m0.026s 00:04:28.719 12:25:13 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:28.719 12:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.719 ************************************ 00:04:28.719 END TEST rpc_plugins 00:04:28.719 ************************************ 00:04:28.719 12:25:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:28.719 12:25:13 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:28.719 12:25:13 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:28.719 12:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.719 ************************************ 00:04:28.719 START TEST rpc_trace_cmd_test 00:04:28.719 ************************************ 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:28.719 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2378451", 00:04:28.719 "tpoint_group_mask": "0x8", 00:04:28.719 "iscsi_conn": { 00:04:28.719 "mask": "0x2", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "scsi": { 00:04:28.719 "mask": "0x4", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "bdev": { 00:04:28.719 "mask": "0x8", 00:04:28.719 "tpoint_mask": 
"0xffffffffffffffff" 00:04:28.719 }, 00:04:28.719 "nvmf_rdma": { 00:04:28.719 "mask": "0x10", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "nvmf_tcp": { 00:04:28.719 "mask": "0x20", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "ftl": { 00:04:28.719 "mask": "0x40", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "blobfs": { 00:04:28.719 "mask": "0x80", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "dsa": { 00:04:28.719 "mask": "0x200", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "thread": { 00:04:28.719 "mask": "0x400", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "nvme_pcie": { 00:04:28.719 "mask": "0x800", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "iaa": { 00:04:28.719 "mask": "0x1000", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "nvme_tcp": { 00:04:28.719 "mask": "0x2000", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "bdev_nvme": { 00:04:28.719 "mask": "0x4000", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 }, 00:04:28.719 "sock": { 00:04:28.719 "mask": "0x8000", 00:04:28.719 "tpoint_mask": "0x0" 00:04:28.719 } 00:04:28.719 }' 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:28.719 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:28.976 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:28.976 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.976 12:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.976 00:04:28.976 real 0m0.228s 00:04:28.976 user 0m0.189s 00:04:28.976 sys 0m0.033s 00:04:28.976 12:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:28.976 12:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.976 ************************************ 00:04:28.976 END TEST rpc_trace_cmd_test 00:04:28.976 ************************************ 00:04:28.976 12:25:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.976 12:25:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.976 12:25:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.976 12:25:13 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:28.976 12:25:13 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:28.976 12:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.976 ************************************ 00:04:28.976 START TEST rpc_daemon_integrity 00:04:28.976 ************************************ 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.976 { 00:04:28.976 "name": "Malloc2", 00:04:28.976 "aliases": [ 00:04:28.976 "38b7df26-3b9e-4129-948f-1f590512049a" 00:04:28.976 ], 00:04:28.976 "product_name": "Malloc disk", 00:04:28.976 "block_size": 512, 00:04:28.976 "num_blocks": 16384, 00:04:28.976 "uuid": "38b7df26-3b9e-4129-948f-1f590512049a", 00:04:28.976 "assigned_rate_limits": { 00:04:28.976 "rw_ios_per_sec": 0, 00:04:28.976 "rw_mbytes_per_sec": 0, 00:04:28.976 "r_mbytes_per_sec": 0, 00:04:28.976 "w_mbytes_per_sec": 0 00:04:28.976 }, 00:04:28.976 "claimed": false, 00:04:28.976 "zoned": false, 00:04:28.976 "supported_io_types": { 00:04:28.976 "read": true, 00:04:28.976 "write": true, 00:04:28.976 "unmap": true, 00:04:28.976 "write_zeroes": true, 00:04:28.976 "flush": true, 00:04:28.976 "reset": true, 00:04:28.976 "compare": false, 00:04:28.976 "compare_and_write": false, 00:04:28.976 "abort": true, 00:04:28.976 "nvme_admin": false, 00:04:28.976 "nvme_io": false 00:04:28.976 }, 00:04:28.976 "memory_domains": [ 00:04:28.976 { 00:04:28.976 "dma_device_id": "system", 00:04:28.976 "dma_device_type": 1 00:04:28.976 }, 00:04:28.976 { 00:04:28.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.976 "dma_device_type": 2 00:04:28.976 } 00:04:28.976 ], 00:04:28.976 "driver_specific": {} 00:04:28.976 } 00:04:28.976 ]' 00:04:28.976 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.257 [2024-05-15 12:25:13.624142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:29.257 [2024-05-15 12:25:13.624176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.257 [2024-05-15 12:25:13.624193] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5976960 00:04:29.257 [2024-05-15 12:25:13.624202] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.257 [2024-05-15 12:25:13.624915] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.257 [2024-05-15 12:25:13.624938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.257 Passthru0 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:29.257 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.257 { 00:04:29.257 "name": "Malloc2", 00:04:29.257 "aliases": [ 00:04:29.257 "38b7df26-3b9e-4129-948f-1f590512049a" 00:04:29.257 ], 00:04:29.257 "product_name": "Malloc disk", 00:04:29.257 "block_size": 512, 00:04:29.257 "num_blocks": 16384, 00:04:29.257 "uuid": "38b7df26-3b9e-4129-948f-1f590512049a", 00:04:29.257 "assigned_rate_limits": { 00:04:29.258 "rw_ios_per_sec": 0, 00:04:29.258 "rw_mbytes_per_sec": 0, 00:04:29.258 "r_mbytes_per_sec": 0, 00:04:29.258 "w_mbytes_per_sec": 0 00:04:29.258 }, 00:04:29.258 "claimed": true, 00:04:29.258 "claim_type": "exclusive_write", 00:04:29.258 "zoned": false, 00:04:29.258 "supported_io_types": { 00:04:29.258 "read": true, 00:04:29.258 "write": true, 00:04:29.258 "unmap": true, 00:04:29.258 "write_zeroes": true, 00:04:29.258 "flush": true, 00:04:29.258 "reset": true, 00:04:29.258 "compare": false, 00:04:29.258 "compare_and_write": false, 00:04:29.258 "abort": true, 00:04:29.258 "nvme_admin": false, 00:04:29.258 "nvme_io": false 00:04:29.258 }, 00:04:29.258 "memory_domains": [ 00:04:29.258 { 00:04:29.258 "dma_device_id": "system", 00:04:29.258 "dma_device_type": 1 00:04:29.258 }, 00:04:29.258 { 00:04:29.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.258 "dma_device_type": 2 00:04:29.258 } 00:04:29.258 ], 00:04:29.258 "driver_specific": {} 00:04:29.258 }, 00:04:29.258 { 00:04:29.258 "name": "Passthru0", 00:04:29.258 "aliases": [ 00:04:29.258 "a4dbe273-e736-5bf7-8a43-e9eac3102eed" 00:04:29.258 ], 00:04:29.258 "product_name": "passthru", 00:04:29.258 "block_size": 512, 00:04:29.258 "num_blocks": 16384, 00:04:29.258 "uuid": "a4dbe273-e736-5bf7-8a43-e9eac3102eed", 00:04:29.258 "assigned_rate_limits": { 00:04:29.258 "rw_ios_per_sec": 0, 00:04:29.258 "rw_mbytes_per_sec": 0, 00:04:29.258 "r_mbytes_per_sec": 0, 00:04:29.258 "w_mbytes_per_sec": 0 00:04:29.258 }, 00:04:29.258 "claimed": false, 00:04:29.258 "zoned": false, 00:04:29.258 "supported_io_types": { 00:04:29.258 "read": true, 00:04:29.258 "write": true, 00:04:29.258 "unmap": true, 00:04:29.258 "write_zeroes": true, 00:04:29.258 "flush": true, 00:04:29.258 "reset": true, 00:04:29.258 "compare": false, 00:04:29.258 "compare_and_write": false, 00:04:29.258 "abort": true, 00:04:29.258 "nvme_admin": false, 00:04:29.258 "nvme_io": false 00:04:29.258 }, 00:04:29.258 "memory_domains": [ 00:04:29.258 { 00:04:29.258 "dma_device_id": "system", 00:04:29.258 "dma_device_type": 1 00:04:29.258 }, 00:04:29.258 { 00:04:29.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.258 "dma_device_type": 2 00:04:29.258 } 00:04:29.258 ], 00:04:29.258 "driver_specific": { 00:04:29.258 "passthru": { 00:04:29.258 "name": "Passthru0", 00:04:29.258 "base_bdev_name": "Malloc2" 00:04:29.258 } 00:04:29.258 } 00:04:29.258 } 00:04:29.258 ]' 00:04:29.258 12:25:13 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.258 00:04:29.258 real 0m0.276s 00:04:29.258 user 0m0.171s 00:04:29.258 sys 0m0.045s 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:29.258 12:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.258 ************************************ 00:04:29.258 END TEST rpc_daemon_integrity 00:04:29.258 ************************************ 00:04:29.258 12:25:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:29.258 12:25:13 rpc -- rpc/rpc.sh@84 -- # killprocess 2378451 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@947 -- # '[' -z 2378451 ']' 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@951 -- # kill -0 2378451 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@952 -- # uname 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2378451 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2378451' 00:04:29.258 killing process with pid 2378451 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@966 -- # kill 2378451 00:04:29.258 12:25:13 rpc -- common/autotest_common.sh@971 -- # wait 2378451 00:04:29.842 00:04:29.842 real 0m2.583s 00:04:29.842 user 0m3.267s 00:04:29.842 sys 0m0.814s 00:04:29.842 12:25:14 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:29.842 12:25:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.842 ************************************ 00:04:29.842 END TEST rpc 00:04:29.842 ************************************ 00:04:29.842 12:25:14 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.842 12:25:14 
-- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:29.842 12:25:14 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:29.842 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:04:29.842 ************************************ 00:04:29.842 START TEST skip_rpc 00:04:29.842 ************************************ 00:04:29.842 12:25:14 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.842 * Looking for test storage... 00:04:29.842 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:29.842 12:25:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:29.842 12:25:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:29.842 12:25:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:29.842 12:25:14 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:29.842 12:25:14 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:29.842 12:25:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.842 ************************************ 00:04:29.842 START TEST skip_rpc 00:04:29.842 ************************************ 00:04:29.842 12:25:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:04:29.842 12:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2379159 00:04:29.842 12:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.842 12:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:29.842 12:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:29.842 [2024-05-15 12:25:14.422308] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:04:29.842 [2024-05-15 12:25:14.422371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379159 ] 00:04:29.843 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.100 [2024-05-15 12:25:14.490435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.100 [2024-05-15 12:25:14.561897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2379159 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 2379159 ']' 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 2379159 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2379159 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2379159' 00:04:35.358 killing process with pid 2379159 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 2379159 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 2379159 00:04:35.358 00:04:35.358 real 0m5.366s 00:04:35.358 user 0m5.119s 00:04:35.358 sys 0m0.283s 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:35.358 12:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.358 ************************************ 00:04:35.358 END TEST skip_rpc 
00:04:35.358 ************************************ 00:04:35.358 12:25:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:35.358 12:25:19 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:35.358 12:25:19 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:35.358 12:25:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.358 ************************************ 00:04:35.358 START TEST skip_rpc_with_json 00:04:35.358 ************************************ 00:04:35.358 12:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:04:35.358 12:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:35.358 12:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2380000 00:04:35.358 12:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.358 12:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.358 12:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2380000 00:04:35.358 12:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 2380000 ']' 00:04:35.359 12:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.359 12:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:35.359 12:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.359 12:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:35.359 12:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.359 [2024-05-15 12:25:19.881160] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:04:35.359 [2024-05-15 12:25:19.881221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380000 ] 00:04:35.359 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.359 [2024-05-15 12:25:19.952699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.616 [2024-05-15 12:25:20.027354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.180 [2024-05-15 12:25:20.705543] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:36.180 request: 00:04:36.180 { 00:04:36.180 "trtype": "tcp", 00:04:36.180 "method": "nvmf_get_transports", 00:04:36.180 "req_id": 1 00:04:36.180 } 00:04:36.180 Got JSON-RPC error response 00:04:36.180 response: 00:04:36.180 { 00:04:36.180 "code": -19, 00:04:36.180 "message": "No such device" 00:04:36.180 } 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.180 [2024-05-15 12:25:20.717633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:36.180 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.439 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:36.439 12:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:36.439 { 00:04:36.439 "subsystems": [ 00:04:36.439 { 00:04:36.439 "subsystem": "scheduler", 00:04:36.439 "config": [ 00:04:36.439 { 00:04:36.439 "method": "framework_set_scheduler", 00:04:36.439 "params": { 00:04:36.439 "name": "static" 00:04:36.439 } 00:04:36.439 } 00:04:36.439 ] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "vmd", 00:04:36.439 "config": [] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "sock", 00:04:36.439 "config": [ 00:04:36.439 { 00:04:36.439 "method": "sock_impl_set_options", 00:04:36.439 "params": { 00:04:36.439 "impl_name": "posix", 00:04:36.439 "recv_buf_size": 2097152, 00:04:36.439 "send_buf_size": 2097152, 00:04:36.439 "enable_recv_pipe": true, 00:04:36.439 "enable_quickack": false, 00:04:36.439 "enable_placement_id": 0, 00:04:36.439 "enable_zerocopy_send_server": true, 00:04:36.439 "enable_zerocopy_send_client": false, 
00:04:36.439 "zerocopy_threshold": 0, 00:04:36.439 "tls_version": 0, 00:04:36.439 "enable_ktls": false 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "sock_impl_set_options", 00:04:36.439 "params": { 00:04:36.439 "impl_name": "ssl", 00:04:36.439 "recv_buf_size": 4096, 00:04:36.439 "send_buf_size": 4096, 00:04:36.439 "enable_recv_pipe": true, 00:04:36.439 "enable_quickack": false, 00:04:36.439 "enable_placement_id": 0, 00:04:36.439 "enable_zerocopy_send_server": true, 00:04:36.439 "enable_zerocopy_send_client": false, 00:04:36.439 "zerocopy_threshold": 0, 00:04:36.439 "tls_version": 0, 00:04:36.439 "enable_ktls": false 00:04:36.439 } 00:04:36.439 } 00:04:36.439 ] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "iobuf", 00:04:36.439 "config": [ 00:04:36.439 { 00:04:36.439 "method": "iobuf_set_options", 00:04:36.439 "params": { 00:04:36.439 "small_pool_count": 8192, 00:04:36.439 "large_pool_count": 1024, 00:04:36.439 "small_bufsize": 8192, 00:04:36.439 "large_bufsize": 135168 00:04:36.439 } 00:04:36.439 } 00:04:36.439 ] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "keyring", 00:04:36.439 "config": [] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "vfio_user_target", 00:04:36.439 "config": null 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "accel", 00:04:36.439 "config": [ 00:04:36.439 { 00:04:36.439 "method": "accel_set_options", 00:04:36.439 "params": { 00:04:36.439 "small_cache_size": 128, 00:04:36.439 "large_cache_size": 16, 00:04:36.439 "task_count": 2048, 00:04:36.439 "sequence_count": 2048, 00:04:36.439 "buf_count": 2048 00:04:36.439 } 00:04:36.439 } 00:04:36.439 ] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "bdev", 00:04:36.439 "config": [ 00:04:36.439 { 00:04:36.439 "method": "bdev_set_options", 00:04:36.439 "params": { 00:04:36.439 "bdev_io_pool_size": 65535, 00:04:36.439 "bdev_io_cache_size": 256, 00:04:36.439 "bdev_auto_examine": true, 00:04:36.439 "iobuf_small_cache_size": 128, 00:04:36.439 "iobuf_large_cache_size": 16 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "bdev_raid_set_options", 00:04:36.439 "params": { 00:04:36.439 "process_window_size_kb": 1024 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "bdev_nvme_set_options", 00:04:36.439 "params": { 00:04:36.439 "action_on_timeout": "none", 00:04:36.439 "timeout_us": 0, 00:04:36.439 "timeout_admin_us": 0, 00:04:36.439 "keep_alive_timeout_ms": 10000, 00:04:36.439 "arbitration_burst": 0, 00:04:36.439 "low_priority_weight": 0, 00:04:36.439 "medium_priority_weight": 0, 00:04:36.439 "high_priority_weight": 0, 00:04:36.439 "nvme_adminq_poll_period_us": 10000, 00:04:36.439 "nvme_ioq_poll_period_us": 0, 00:04:36.439 "io_queue_requests": 0, 00:04:36.439 "delay_cmd_submit": true, 00:04:36.439 "transport_retry_count": 4, 00:04:36.439 "bdev_retry_count": 3, 00:04:36.439 "transport_ack_timeout": 0, 00:04:36.439 "ctrlr_loss_timeout_sec": 0, 00:04:36.439 "reconnect_delay_sec": 0, 00:04:36.439 "fast_io_fail_timeout_sec": 0, 00:04:36.439 "disable_auto_failback": false, 00:04:36.439 "generate_uuids": false, 00:04:36.439 "transport_tos": 0, 00:04:36.439 "nvme_error_stat": false, 00:04:36.439 "rdma_srq_size": 0, 00:04:36.439 "io_path_stat": false, 00:04:36.439 "allow_accel_sequence": false, 00:04:36.439 "rdma_max_cq_size": 0, 00:04:36.439 "rdma_cm_event_timeout_ms": 0, 00:04:36.439 "dhchap_digests": [ 00:04:36.439 "sha256", 00:04:36.439 "sha384", 00:04:36.439 "sha512" 00:04:36.439 ], 00:04:36.439 "dhchap_dhgroups": [ 
00:04:36.439 "null", 00:04:36.439 "ffdhe2048", 00:04:36.439 "ffdhe3072", 00:04:36.439 "ffdhe4096", 00:04:36.439 "ffdhe6144", 00:04:36.439 "ffdhe8192" 00:04:36.439 ] 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "bdev_nvme_set_hotplug", 00:04:36.439 "params": { 00:04:36.439 "period_us": 100000, 00:04:36.439 "enable": false 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "bdev_iscsi_set_options", 00:04:36.439 "params": { 00:04:36.439 "timeout_sec": 30 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "bdev_wait_for_examine" 00:04:36.439 } 00:04:36.439 ] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "nvmf", 00:04:36.439 "config": [ 00:04:36.439 { 00:04:36.439 "method": "nvmf_set_config", 00:04:36.439 "params": { 00:04:36.439 "discovery_filter": "match_any", 00:04:36.439 "admin_cmd_passthru": { 00:04:36.439 "identify_ctrlr": false 00:04:36.439 } 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "nvmf_set_max_subsystems", 00:04:36.439 "params": { 00:04:36.439 "max_subsystems": 1024 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "nvmf_set_crdt", 00:04:36.439 "params": { 00:04:36.439 "crdt1": 0, 00:04:36.439 "crdt2": 0, 00:04:36.439 "crdt3": 0 00:04:36.439 } 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "method": "nvmf_create_transport", 00:04:36.439 "params": { 00:04:36.439 "trtype": "TCP", 00:04:36.439 "max_queue_depth": 128, 00:04:36.439 "max_io_qpairs_per_ctrlr": 127, 00:04:36.439 "in_capsule_data_size": 4096, 00:04:36.439 "max_io_size": 131072, 00:04:36.439 "io_unit_size": 131072, 00:04:36.439 "max_aq_depth": 128, 00:04:36.439 "num_shared_buffers": 511, 00:04:36.439 "buf_cache_size": 4294967295, 00:04:36.439 "dif_insert_or_strip": false, 00:04:36.439 "zcopy": false, 00:04:36.439 "c2h_success": true, 00:04:36.439 "sock_priority": 0, 00:04:36.439 "abort_timeout_sec": 1, 00:04:36.439 "ack_timeout": 0, 00:04:36.439 "data_wr_pool_size": 0 00:04:36.439 } 00:04:36.439 } 00:04:36.439 ] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "nbd", 00:04:36.439 "config": [] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "ublk", 00:04:36.439 "config": [] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "vhost_blk", 00:04:36.439 "config": [] 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "scsi", 00:04:36.439 "config": null 00:04:36.439 }, 00:04:36.439 { 00:04:36.439 "subsystem": "iscsi", 00:04:36.439 "config": [ 00:04:36.439 { 00:04:36.439 "method": "iscsi_set_options", 00:04:36.439 "params": { 00:04:36.439 "node_base": "iqn.2016-06.io.spdk", 00:04:36.439 "max_sessions": 128, 00:04:36.439 "max_connections_per_session": 2, 00:04:36.439 "max_queue_depth": 64, 00:04:36.439 "default_time2wait": 2, 00:04:36.439 "default_time2retain": 20, 00:04:36.439 "first_burst_length": 8192, 00:04:36.439 "immediate_data": true, 00:04:36.439 "allow_duplicated_isid": false, 00:04:36.439 "error_recovery_level": 0, 00:04:36.439 "nop_timeout": 60, 00:04:36.439 "nop_in_interval": 30, 00:04:36.439 "disable_chap": false, 00:04:36.439 "require_chap": false, 00:04:36.439 "mutual_chap": false, 00:04:36.439 "chap_group": 0, 00:04:36.439 "max_large_datain_per_connection": 64, 00:04:36.439 "max_r2t_per_connection": 4, 00:04:36.439 "pdu_pool_size": 36864, 00:04:36.440 "immediate_data_pool_size": 16384, 00:04:36.440 "data_out_pool_size": 2048 00:04:36.440 } 00:04:36.440 } 00:04:36.440 ] 00:04:36.440 }, 00:04:36.440 { 00:04:36.440 "subsystem": "vhost_scsi", 00:04:36.440 "config": [] 00:04:36.440 } 
00:04:36.440 ] 00:04:36.440 } 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2380000 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2380000 ']' 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2380000 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2380000 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2380000' 00:04:36.440 killing process with pid 2380000 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2380000 00:04:36.440 12:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2380000 00:04:36.697 12:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:36.697 12:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2380278 00:04:36.697 12:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2380278 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2380278 ']' 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2380278 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2380278 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2380278' 00:04:41.951 killing process with pid 2380278 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2380278 00:04:41.951 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2380278 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:42.209 00:04:42.209 real 0m6.758s 00:04:42.209 user 0m6.549s 00:04:42.209 sys 0m0.644s 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # 
xtrace_disable 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.209 ************************************ 00:04:42.209 END TEST skip_rpc_with_json 00:04:42.209 ************************************ 00:04:42.209 12:25:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:42.209 12:25:26 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:42.209 12:25:26 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:42.209 12:25:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.209 ************************************ 00:04:42.209 START TEST skip_rpc_with_delay 00:04:42.209 ************************************ 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.209 [2024-05-15 12:25:26.711097] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:42.209 [2024-05-15 12:25:26.711175] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:42.209 00:04:42.209 real 0m0.028s 00:04:42.209 user 0m0.016s 00:04:42.209 sys 0m0.013s 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:42.209 12:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:42.209 ************************************ 00:04:42.209 END TEST skip_rpc_with_delay 00:04:42.209 ************************************ 00:04:42.209 12:25:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:42.209 12:25:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:42.209 12:25:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:42.209 12:25:26 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:42.209 12:25:26 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:42.209 12:25:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.209 ************************************ 00:04:42.209 START TEST exit_on_failed_rpc_init 00:04:42.209 ************************************ 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2381385 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2381385 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 2381385 ']' 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:42.209 12:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.467 [2024-05-15 12:25:26.842663] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:04:42.467 [2024-05-15 12:25:26.842720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381385 ] 00:04:42.467 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.467 [2024-05-15 12:25:26.909751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.467 [2024-05-15 12:25:26.981256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:43.399 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.399 [2024-05-15 12:25:27.685336] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:04:43.400 [2024-05-15 12:25:27.685418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381408 ] 00:04:43.400 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.400 [2024-05-15 12:25:27.756919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.400 [2024-05-15 12:25:27.831779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.400 [2024-05-15 12:25:27.831853] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:04:43.400 [2024-05-15 12:25:27.831865] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:43.400 [2024-05-15 12:25:27.831873] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2381385 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 2381385 ']' 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 2381385 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2381385 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2381385' 00:04:43.400 killing process with pid 2381385 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 2381385 00:04:43.400 12:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 2381385 00:04:43.658 00:04:43.658 real 0m1.440s 00:04:43.658 user 0m1.617s 00:04:43.658 sys 0m0.427s 00:04:43.658 12:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:43.658 12:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.658 ************************************ 00:04:43.658 END TEST exit_on_failed_rpc_init 00:04:43.658 ************************************ 00:04:43.990 12:25:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:43.990 00:04:43.990 real 0m14.053s 00:04:43.990 user 0m13.453s 00:04:43.990 sys 0m1.688s 00:04:43.990 12:25:28 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:43.990 12:25:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.990 ************************************ 00:04:43.990 END TEST skip_rpc 00:04:43.990 ************************************ 00:04:43.990 12:25:28 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.990 12:25:28 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:43.990 12:25:28 -- common/autotest_common.sh@1104 -- # xtrace_disable 
00:04:43.990 12:25:28 -- common/autotest_common.sh@10 -- # set +x 00:04:43.990 ************************************ 00:04:43.990 START TEST rpc_client 00:04:43.990 ************************************ 00:04:43.990 12:25:28 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.990 * Looking for test storage... 00:04:43.990 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:43.990 12:25:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:43.990 OK 00:04:43.990 12:25:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.990 00:04:43.990 real 0m0.130s 00:04:43.990 user 0m0.050s 00:04:43.990 sys 0m0.089s 00:04:43.990 12:25:28 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:43.990 12:25:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:43.990 ************************************ 00:04:43.990 END TEST rpc_client 00:04:43.990 ************************************ 00:04:43.990 12:25:28 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:43.990 12:25:28 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:43.990 12:25:28 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:43.990 12:25:28 -- common/autotest_common.sh@10 -- # set +x 00:04:43.990 ************************************ 00:04:43.990 START TEST json_config 00:04:43.990 ************************************ 00:04:43.990 12:25:28 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.248 12:25:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.248 12:25:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.248 12:25:28 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:44.248 12:25:28 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.248 12:25:28 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.248 12:25:28 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.248 12:25:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.248 12:25:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.249 12:25:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.249 12:25:28 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.249 12:25:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@47 -- # : 0 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:44.249 12:25:28 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:44.249 12:25:28 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:44.249 12:25:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.249 12:25:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.249 12:25:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.249 12:25:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.249 12:25:28 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:44.249 WARNING: No tests are enabled so not running JSON configuration tests 00:04:44.249 12:25:28 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:44.249 00:04:44.249 real 0m0.078s 00:04:44.249 user 0m0.035s 00:04:44.249 sys 0m0.043s 00:04:44.249 12:25:28 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:44.249 12:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.249 ************************************ 00:04:44.249 END TEST json_config 00:04:44.249 ************************************ 00:04:44.249 12:25:28 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:44.249 12:25:28 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:44.249 12:25:28 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:44.249 12:25:28 -- common/autotest_common.sh@10 -- # set +x 00:04:44.249 ************************************ 00:04:44.249 START TEST json_config_extra_key 00:04:44.249 ************************************ 00:04:44.249 12:25:28 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:44.249 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
00:04:44.249 12:25:28 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.249 12:25:28 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.249 12:25:28 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.249 12:25:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.249 12:25:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.249 12:25:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.249 12:25:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:44.249 12:25:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:44.249 12:25:28 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:44.249 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:44.249 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:44.249 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:44.249 12:25:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:44.249 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:44.507 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:44.507 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:44.507 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:44.507 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:44.507 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.507 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:44.507 INFO: launching applications... 00:04:44.507 12:25:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2381810 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.507 Waiting for target to run... 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2381810 /var/tmp/spdk_tgt.sock 00:04:44.507 12:25:28 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 2381810 ']' 00:04:44.507 12:25:28 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.507 12:25:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:44.507 12:25:28 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:44.507 12:25:28 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.507 12:25:28 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:44.507 12:25:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:44.507 [2024-05-15 12:25:28.893687] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:04:44.507 [2024-05-15 12:25:28.893777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381810 ] 00:04:44.507 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.765 [2024-05-15 12:25:29.332954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.021 [2024-05-15 12:25:29.425108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.279 12:25:29 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:45.279 12:25:29 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.279 00:04:45.279 12:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:45.279 INFO: shutting down applications... 00:04:45.279 12:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2381810 ]] 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2381810 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2381810 00:04:45.279 12:25:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.844 12:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.844 12:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.844 12:25:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2381810 00:04:45.844 12:25:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:45.844 12:25:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:45.844 12:25:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:45.844 12:25:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:45.844 SPDK target shutdown done 00:04:45.844 12:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:45.844 Success 00:04:45.844 00:04:45.844 real 0m1.448s 00:04:45.844 user 0m1.015s 00:04:45.844 sys 0m0.553s 00:04:45.844 12:25:30 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:45.844 12:25:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.844 ************************************ 00:04:45.844 END TEST json_config_extra_key 00:04:45.844 ************************************ 00:04:45.844 12:25:30 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.844 12:25:30 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.844 12:25:30 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.844 12:25:30 -- common/autotest_common.sh@10 -- # set +x 00:04:45.844 ************************************ 
00:04:45.844 START TEST alias_rpc 00:04:45.844 ************************************ 00:04:45.844 12:25:30 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.844 * Looking for test storage... 00:04:45.844 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:45.844 12:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:45.844 12:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2382131 00:04:45.844 12:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2382131 00:04:45.844 12:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.844 12:25:30 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 2382131 ']' 00:04:45.844 12:25:30 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.844 12:25:30 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:45.844 12:25:30 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.844 12:25:30 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:45.844 12:25:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.844 [2024-05-15 12:25:30.428090] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:04:45.844 [2024-05-15 12:25:30.428164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382131 ] 00:04:46.103 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.103 [2024-05-15 12:25:30.497811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.103 [2024-05-15 12:25:30.573212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.667 12:25:31 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:46.667 12:25:31 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:04:46.667 12:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:46.924 12:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2382131 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 2382131 ']' 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 2382131 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2382131 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2382131' 00:04:46.924 killing process with pid 2382131 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@966 -- # kill 2382131 00:04:46.924 12:25:31 alias_rpc -- common/autotest_common.sh@971 -- # wait 
2382131 00:04:47.181 00:04:47.181 real 0m1.501s 00:04:47.181 user 0m1.586s 00:04:47.181 sys 0m0.461s 00:04:47.181 12:25:31 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:47.181 12:25:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.181 ************************************ 00:04:47.181 END TEST alias_rpc 00:04:47.181 ************************************ 00:04:47.439 12:25:31 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:47.439 12:25:31 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.439 12:25:31 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:47.439 12:25:31 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:47.439 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:04:47.439 ************************************ 00:04:47.439 START TEST spdkcli_tcp 00:04:47.439 ************************************ 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.439 * Looking for test storage... 00:04:47.439 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2382452 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2382452 00:04:47.439 12:25:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 2382452 ']' 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:47.439 12:25:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.439 [2024-05-15 12:25:32.013154] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:04:47.439 [2024-05-15 12:25:32.013225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382452 ] 00:04:47.439 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.696 [2024-05-15 12:25:32.081668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.696 [2024-05-15 12:25:32.154927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.697 [2024-05-15 12:25:32.154929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.260 12:25:32 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:48.260 12:25:32 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:04:48.260 12:25:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2382713 00:04:48.260 12:25:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.260 12:25:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.517 [ 00:04:48.518 "spdk_get_version", 00:04:48.518 "rpc_get_methods", 00:04:48.518 "trace_get_info", 00:04:48.518 "trace_get_tpoint_group_mask", 00:04:48.518 "trace_disable_tpoint_group", 00:04:48.518 "trace_enable_tpoint_group", 00:04:48.518 "trace_clear_tpoint_mask", 00:04:48.518 "trace_set_tpoint_mask", 00:04:48.518 "vfu_tgt_set_base_path", 00:04:48.518 "framework_get_pci_devices", 00:04:48.518 "framework_get_config", 00:04:48.518 "framework_get_subsystems", 00:04:48.518 "keyring_get_keys", 00:04:48.518 "iobuf_get_stats", 00:04:48.518 "iobuf_set_options", 00:04:48.518 "sock_get_default_impl", 00:04:48.518 "sock_set_default_impl", 00:04:48.518 "sock_impl_set_options", 00:04:48.518 "sock_impl_get_options", 00:04:48.518 "vmd_rescan", 00:04:48.518 "vmd_remove_device", 00:04:48.518 "vmd_enable", 00:04:48.518 "accel_get_stats", 00:04:48.518 "accel_set_options", 00:04:48.518 "accel_set_driver", 00:04:48.518 "accel_crypto_key_destroy", 00:04:48.518 "accel_crypto_keys_get", 00:04:48.518 "accel_crypto_key_create", 00:04:48.518 "accel_assign_opc", 00:04:48.518 "accel_get_module_info", 00:04:48.518 "accel_get_opc_assignments", 00:04:48.518 "notify_get_notifications", 00:04:48.518 "notify_get_types", 00:04:48.518 "bdev_get_histogram", 00:04:48.518 "bdev_enable_histogram", 00:04:48.518 "bdev_set_qos_limit", 00:04:48.518 "bdev_set_qd_sampling_period", 00:04:48.518 "bdev_get_bdevs", 00:04:48.518 "bdev_reset_iostat", 00:04:48.518 "bdev_get_iostat", 00:04:48.518 "bdev_examine", 00:04:48.518 "bdev_wait_for_examine", 00:04:48.518 "bdev_set_options", 00:04:48.518 "scsi_get_devices", 00:04:48.518 "thread_set_cpumask", 00:04:48.518 "framework_get_scheduler", 00:04:48.518 "framework_set_scheduler", 00:04:48.518 "framework_get_reactors", 00:04:48.518 "thread_get_io_channels", 00:04:48.518 "thread_get_pollers", 00:04:48.518 "thread_get_stats", 00:04:48.518 "framework_monitor_context_switch", 00:04:48.518 "spdk_kill_instance", 00:04:48.518 "log_enable_timestamps", 00:04:48.518 "log_get_flags", 00:04:48.518 "log_clear_flag", 00:04:48.518 "log_set_flag", 00:04:48.518 "log_get_level", 00:04:48.518 "log_set_level", 00:04:48.518 "log_get_print_level", 00:04:48.518 "log_set_print_level", 00:04:48.518 "framework_enable_cpumask_locks", 00:04:48.518 "framework_disable_cpumask_locks", 00:04:48.518 "framework_wait_init", 00:04:48.518 
"framework_start_init", 00:04:48.518 "virtio_blk_create_transport", 00:04:48.518 "virtio_blk_get_transports", 00:04:48.518 "vhost_controller_set_coalescing", 00:04:48.518 "vhost_get_controllers", 00:04:48.518 "vhost_delete_controller", 00:04:48.518 "vhost_create_blk_controller", 00:04:48.518 "vhost_scsi_controller_remove_target", 00:04:48.518 "vhost_scsi_controller_add_target", 00:04:48.518 "vhost_start_scsi_controller", 00:04:48.518 "vhost_create_scsi_controller", 00:04:48.518 "ublk_recover_disk", 00:04:48.518 "ublk_get_disks", 00:04:48.518 "ublk_stop_disk", 00:04:48.518 "ublk_start_disk", 00:04:48.518 "ublk_destroy_target", 00:04:48.518 "ublk_create_target", 00:04:48.518 "nbd_get_disks", 00:04:48.518 "nbd_stop_disk", 00:04:48.518 "nbd_start_disk", 00:04:48.518 "env_dpdk_get_mem_stats", 00:04:48.518 "nvmf_stop_mdns_prr", 00:04:48.518 "nvmf_publish_mdns_prr", 00:04:48.518 "nvmf_subsystem_get_listeners", 00:04:48.518 "nvmf_subsystem_get_qpairs", 00:04:48.518 "nvmf_subsystem_get_controllers", 00:04:48.518 "nvmf_get_stats", 00:04:48.518 "nvmf_get_transports", 00:04:48.518 "nvmf_create_transport", 00:04:48.518 "nvmf_get_targets", 00:04:48.518 "nvmf_delete_target", 00:04:48.518 "nvmf_create_target", 00:04:48.518 "nvmf_subsystem_allow_any_host", 00:04:48.518 "nvmf_subsystem_remove_host", 00:04:48.518 "nvmf_subsystem_add_host", 00:04:48.518 "nvmf_ns_remove_host", 00:04:48.518 "nvmf_ns_add_host", 00:04:48.518 "nvmf_subsystem_remove_ns", 00:04:48.518 "nvmf_subsystem_add_ns", 00:04:48.518 "nvmf_subsystem_listener_set_ana_state", 00:04:48.518 "nvmf_discovery_get_referrals", 00:04:48.518 "nvmf_discovery_remove_referral", 00:04:48.518 "nvmf_discovery_add_referral", 00:04:48.518 "nvmf_subsystem_remove_listener", 00:04:48.518 "nvmf_subsystem_add_listener", 00:04:48.518 "nvmf_delete_subsystem", 00:04:48.518 "nvmf_create_subsystem", 00:04:48.518 "nvmf_get_subsystems", 00:04:48.518 "nvmf_set_crdt", 00:04:48.518 "nvmf_set_config", 00:04:48.518 "nvmf_set_max_subsystems", 00:04:48.518 "iscsi_get_histogram", 00:04:48.518 "iscsi_enable_histogram", 00:04:48.518 "iscsi_set_options", 00:04:48.518 "iscsi_get_auth_groups", 00:04:48.518 "iscsi_auth_group_remove_secret", 00:04:48.518 "iscsi_auth_group_add_secret", 00:04:48.518 "iscsi_delete_auth_group", 00:04:48.518 "iscsi_create_auth_group", 00:04:48.518 "iscsi_set_discovery_auth", 00:04:48.518 "iscsi_get_options", 00:04:48.518 "iscsi_target_node_request_logout", 00:04:48.518 "iscsi_target_node_set_redirect", 00:04:48.518 "iscsi_target_node_set_auth", 00:04:48.518 "iscsi_target_node_add_lun", 00:04:48.518 "iscsi_get_stats", 00:04:48.518 "iscsi_get_connections", 00:04:48.518 "iscsi_portal_group_set_auth", 00:04:48.518 "iscsi_start_portal_group", 00:04:48.518 "iscsi_delete_portal_group", 00:04:48.518 "iscsi_create_portal_group", 00:04:48.518 "iscsi_get_portal_groups", 00:04:48.518 "iscsi_delete_target_node", 00:04:48.518 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.518 "iscsi_target_node_add_pg_ig_maps", 00:04:48.518 "iscsi_create_target_node", 00:04:48.518 "iscsi_get_target_nodes", 00:04:48.518 "iscsi_delete_initiator_group", 00:04:48.518 "iscsi_initiator_group_remove_initiators", 00:04:48.518 "iscsi_initiator_group_add_initiators", 00:04:48.518 "iscsi_create_initiator_group", 00:04:48.518 "iscsi_get_initiator_groups", 00:04:48.518 "keyring_file_remove_key", 00:04:48.518 "keyring_file_add_key", 00:04:48.518 "vfu_virtio_create_scsi_endpoint", 00:04:48.518 "vfu_virtio_scsi_remove_target", 00:04:48.518 "vfu_virtio_scsi_add_target", 00:04:48.518 
"vfu_virtio_create_blk_endpoint", 00:04:48.518 "vfu_virtio_delete_endpoint", 00:04:48.518 "iaa_scan_accel_module", 00:04:48.518 "dsa_scan_accel_module", 00:04:48.518 "ioat_scan_accel_module", 00:04:48.518 "accel_error_inject_error", 00:04:48.518 "bdev_iscsi_delete", 00:04:48.518 "bdev_iscsi_create", 00:04:48.518 "bdev_iscsi_set_options", 00:04:48.518 "bdev_virtio_attach_controller", 00:04:48.518 "bdev_virtio_scsi_get_devices", 00:04:48.518 "bdev_virtio_detach_controller", 00:04:48.518 "bdev_virtio_blk_set_hotplug", 00:04:48.518 "bdev_ftl_set_property", 00:04:48.518 "bdev_ftl_get_properties", 00:04:48.518 "bdev_ftl_get_stats", 00:04:48.518 "bdev_ftl_unmap", 00:04:48.518 "bdev_ftl_unload", 00:04:48.518 "bdev_ftl_delete", 00:04:48.518 "bdev_ftl_load", 00:04:48.518 "bdev_ftl_create", 00:04:48.518 "bdev_aio_delete", 00:04:48.518 "bdev_aio_rescan", 00:04:48.518 "bdev_aio_create", 00:04:48.518 "blobfs_create", 00:04:48.518 "blobfs_detect", 00:04:48.518 "blobfs_set_cache_size", 00:04:48.518 "bdev_zone_block_delete", 00:04:48.518 "bdev_zone_block_create", 00:04:48.518 "bdev_delay_delete", 00:04:48.518 "bdev_delay_create", 00:04:48.518 "bdev_delay_update_latency", 00:04:48.518 "bdev_split_delete", 00:04:48.518 "bdev_split_create", 00:04:48.518 "bdev_error_inject_error", 00:04:48.518 "bdev_error_delete", 00:04:48.518 "bdev_error_create", 00:04:48.518 "bdev_raid_set_options", 00:04:48.518 "bdev_raid_remove_base_bdev", 00:04:48.518 "bdev_raid_add_base_bdev", 00:04:48.518 "bdev_raid_delete", 00:04:48.518 "bdev_raid_create", 00:04:48.518 "bdev_raid_get_bdevs", 00:04:48.518 "bdev_lvol_check_shallow_copy", 00:04:48.518 "bdev_lvol_start_shallow_copy", 00:04:48.518 "bdev_lvol_grow_lvstore", 00:04:48.518 "bdev_lvol_get_lvols", 00:04:48.518 "bdev_lvol_get_lvstores", 00:04:48.518 "bdev_lvol_delete", 00:04:48.518 "bdev_lvol_set_read_only", 00:04:48.518 "bdev_lvol_resize", 00:04:48.518 "bdev_lvol_decouple_parent", 00:04:48.518 "bdev_lvol_inflate", 00:04:48.518 "bdev_lvol_rename", 00:04:48.518 "bdev_lvol_clone_bdev", 00:04:48.518 "bdev_lvol_clone", 00:04:48.518 "bdev_lvol_snapshot", 00:04:48.518 "bdev_lvol_create", 00:04:48.518 "bdev_lvol_delete_lvstore", 00:04:48.518 "bdev_lvol_rename_lvstore", 00:04:48.518 "bdev_lvol_create_lvstore", 00:04:48.518 "bdev_passthru_delete", 00:04:48.518 "bdev_passthru_create", 00:04:48.518 "bdev_nvme_cuse_unregister", 00:04:48.518 "bdev_nvme_cuse_register", 00:04:48.518 "bdev_opal_new_user", 00:04:48.518 "bdev_opal_set_lock_state", 00:04:48.518 "bdev_opal_delete", 00:04:48.518 "bdev_opal_get_info", 00:04:48.518 "bdev_opal_create", 00:04:48.518 "bdev_nvme_opal_revert", 00:04:48.518 "bdev_nvme_opal_init", 00:04:48.518 "bdev_nvme_send_cmd", 00:04:48.518 "bdev_nvme_get_path_iostat", 00:04:48.518 "bdev_nvme_get_mdns_discovery_info", 00:04:48.518 "bdev_nvme_stop_mdns_discovery", 00:04:48.518 "bdev_nvme_start_mdns_discovery", 00:04:48.518 "bdev_nvme_set_multipath_policy", 00:04:48.518 "bdev_nvme_set_preferred_path", 00:04:48.518 "bdev_nvme_get_io_paths", 00:04:48.518 "bdev_nvme_remove_error_injection", 00:04:48.518 "bdev_nvme_add_error_injection", 00:04:48.518 "bdev_nvme_get_discovery_info", 00:04:48.518 "bdev_nvme_stop_discovery", 00:04:48.518 "bdev_nvme_start_discovery", 00:04:48.518 "bdev_nvme_get_controller_health_info", 00:04:48.518 "bdev_nvme_disable_controller", 00:04:48.518 "bdev_nvme_enable_controller", 00:04:48.518 "bdev_nvme_reset_controller", 00:04:48.518 "bdev_nvme_get_transport_statistics", 00:04:48.519 "bdev_nvme_apply_firmware", 00:04:48.519 "bdev_nvme_detach_controller", 
00:04:48.519 "bdev_nvme_get_controllers", 00:04:48.519 "bdev_nvme_attach_controller", 00:04:48.519 "bdev_nvme_set_hotplug", 00:04:48.519 "bdev_nvme_set_options", 00:04:48.519 "bdev_null_resize", 00:04:48.519 "bdev_null_delete", 00:04:48.519 "bdev_null_create", 00:04:48.519 "bdev_malloc_delete", 00:04:48.519 "bdev_malloc_create" 00:04:48.519 ] 00:04:48.519 12:25:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.519 12:25:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.519 12:25:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2382452 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 2382452 ']' 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 2382452 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2382452 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2382452' 00:04:48.519 killing process with pid 2382452 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 2382452 00:04:48.519 12:25:33 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 2382452 00:04:49.081 00:04:49.081 real 0m1.523s 00:04:49.081 user 0m2.797s 00:04:49.081 sys 0m0.489s 00:04:49.081 12:25:33 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:49.081 12:25:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.081 ************************************ 00:04:49.081 END TEST spdkcli_tcp 00:04:49.081 ************************************ 00:04:49.081 12:25:33 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.081 12:25:33 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:49.081 12:25:33 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:49.081 12:25:33 -- common/autotest_common.sh@10 -- # set +x 00:04:49.081 ************************************ 00:04:49.081 START TEST dpdk_mem_utility 00:04:49.081 ************************************ 00:04:49.081 12:25:33 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.081 * Looking for test storage... 
00:04:49.081 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:49.081 12:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.081 12:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2382790 00:04:49.081 12:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2382790 00:04:49.081 12:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.081 12:25:33 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 2382790 ']' 00:04:49.081 12:25:33 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.081 12:25:33 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:49.081 12:25:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.081 12:25:33 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:49.081 12:25:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.081 [2024-05-15 12:25:33.625459] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:04:49.081 [2024-05-15 12:25:33.625531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382790 ] 00:04:49.081 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.081 [2024-05-15 12:25:33.695677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.338 [2024-05-15 12:25:33.774541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.902 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:49.902 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:04:49.902 12:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:49.902 12:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:49.902 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:49.902 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.902 { 00:04:49.902 "filename": "/tmp/spdk_mem_dump.txt" 00:04:49.902 } 00:04:49.902 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:49.902 12:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.902 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:49.902 1 heaps totaling size 814.000000 MiB 00:04:49.902 size: 814.000000 MiB heap id: 0 00:04:49.902 end heaps---------- 00:04:49.902 8 mempools totaling size 598.116089 MiB 00:04:49.902 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:49.902 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:49.902 size: 84.521057 MiB name: bdev_io_2382790 00:04:49.902 size: 51.011292 MiB name: evtpool_2382790 00:04:49.902 size: 50.003479 MiB 
name: msgpool_2382790 00:04:49.902 size: 21.763794 MiB name: PDU_Pool 00:04:49.902 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:49.902 size: 0.026123 MiB name: Session_Pool 00:04:49.902 end mempools------- 00:04:49.902 6 memzones totaling size 4.142822 MiB 00:04:49.902 size: 1.000366 MiB name: RG_ring_0_2382790 00:04:49.902 size: 1.000366 MiB name: RG_ring_1_2382790 00:04:49.902 size: 1.000366 MiB name: RG_ring_4_2382790 00:04:49.902 size: 1.000366 MiB name: RG_ring_5_2382790 00:04:49.902 size: 0.125366 MiB name: RG_ring_2_2382790 00:04:49.902 size: 0.015991 MiB name: RG_ring_3_2382790 00:04:49.902 end memzones------- 00:04:49.902 12:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:50.161 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:50.161 list of free elements. size: 12.519348 MiB 00:04:50.161 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:50.161 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:50.161 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:50.161 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:50.161 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:50.161 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:50.161 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:50.161 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:50.161 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:50.161 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:50.161 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:50.161 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:50.161 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:50.161 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:50.161 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:50.161 list of standard malloc elements. 
size: 199.218079 MiB 00:04:50.161 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:50.161 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:50.161 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:50.161 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:50.161 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:50.161 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:50.161 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:50.161 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:50.161 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:50.161 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:50.161 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:50.161 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:50.161 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:50.161 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:50.161 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:50.161 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:50.161 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:50.161 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:50.161 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:50.161 list of memzone associated elements. 
size: 602.262573 MiB 00:04:50.161 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:50.161 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:50.161 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:50.161 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:50.161 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:50.161 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2382790_0 00:04:50.161 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:50.161 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2382790_0 00:04:50.161 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:50.161 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2382790_0 00:04:50.161 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:50.161 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:50.161 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:50.161 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:50.161 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:50.161 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2382790 00:04:50.161 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:50.161 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2382790 00:04:50.161 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:50.161 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2382790 00:04:50.161 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:50.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:50.161 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:50.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:50.161 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:50.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:50.161 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:50.161 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:50.161 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:50.161 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2382790 00:04:50.161 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:50.161 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2382790 00:04:50.161 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:50.161 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2382790 00:04:50.161 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:50.161 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2382790 00:04:50.161 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:50.161 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2382790 00:04:50.161 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:50.161 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:50.161 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:50.161 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:50.161 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:50.161 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:50.161 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:50.161 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2382790 00:04:50.161 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:50.161 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:50.161 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:50.161 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:50.161 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:50.161 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2382790 00:04:50.161 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:50.161 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:50.161 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:50.161 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2382790 00:04:50.161 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:50.161 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2382790 00:04:50.161 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:50.161 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:50.161 12:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:50.161 12:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2382790 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 2382790 ']' 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 2382790 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2382790 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2382790' 00:04:50.161 killing process with pid 2382790 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 2382790 00:04:50.161 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 2382790 00:04:50.421 00:04:50.421 real 0m1.412s 00:04:50.421 user 0m1.446s 00:04:50.421 sys 0m0.435s 00:04:50.421 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:50.421 12:25:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.421 ************************************ 00:04:50.421 END TEST dpdk_mem_utility 00:04:50.421 ************************************ 00:04:50.421 12:25:34 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:50.421 12:25:34 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:50.421 12:25:34 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:50.421 12:25:34 -- common/autotest_common.sh@10 -- # set +x 00:04:50.421 ************************************ 00:04:50.421 START TEST event 00:04:50.421 ************************************ 00:04:50.421 12:25:34 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:50.679 * Looking for test storage... 
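Editor's note: the heap/mempool/memzone listing above is produced in two steps: the env_dpdk_get_mem_stats RPC asks the target to write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that file. A minimal sketch, assuming a target on the default /var/tmp/spdk.sock and relative script paths instead of the Jenkins workspace ones:

#!/usr/bin/env bash
# 1) Have the target dump its DPDK memory stats; the RPC replies with
#    {"filename": "/tmp/spdk_mem_dump.txt"} as seen in the log above.
./scripts/rpc.py env_dpdk_get_mem_stats

# 2) Summarize the dump (heaps, mempools, memzones)...
./scripts/dpdk_mem_info.py

# 3) ...or show the element/memzone detail for a single heap (-m 0 above).
./scripts/dpdk_mem_info.py -m 0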
00:04:50.679 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:04:50.679 12:25:35 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:50.679 12:25:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:50.679 12:25:35 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.679 12:25:35 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:04:50.679 12:25:35 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:50.679 12:25:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.679 ************************************ 00:04:50.679 START TEST event_perf 00:04:50.679 ************************************ 00:04:50.679 12:25:35 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.679 Running I/O for 1 seconds...[2024-05-15 12:25:35.161472] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:04:50.679 [2024-05-15 12:25:35.161554] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383115 ] 00:04:50.679 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.679 [2024-05-15 12:25:35.234162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:50.937 [2024-05-15 12:25:35.310546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.937 [2024-05-15 12:25:35.310644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.937 [2024-05-15 12:25:35.310704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.937 [2024-05-15 12:25:35.310706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.868 Running I/O for 1 seconds... 00:04:51.868 lcore 0: 199360 00:04:51.868 lcore 1: 199362 00:04:51.868 lcore 2: 199362 00:04:51.868 lcore 3: 199361 00:04:51.868 done. 00:04:51.868 00:04:51.868 real 0m1.233s 00:04:51.868 user 0m4.128s 00:04:51.868 sys 0m0.100s 00:04:51.868 12:25:36 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:51.868 12:25:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.868 ************************************ 00:04:51.868 END TEST event_perf 00:04:51.868 ************************************ 00:04:51.868 12:25:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:51.868 12:25:36 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:04:51.868 12:25:36 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:51.868 12:25:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.868 ************************************ 00:04:51.868 START TEST event_reactor 00:04:51.868 ************************************ 00:04:51.868 12:25:36 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:51.868 [2024-05-15 12:25:36.484329] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:04:51.868 [2024-05-15 12:25:36.484418] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383409 ] 00:04:52.126 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.126 [2024-05-15 12:25:36.555667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.126 [2024-05-15 12:25:36.627084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.498 test_start 00:04:53.498 oneshot 00:04:53.498 tick 100 00:04:53.498 tick 100 00:04:53.498 tick 250 00:04:53.498 tick 100 00:04:53.498 tick 100 00:04:53.498 tick 100 00:04:53.498 tick 250 00:04:53.498 tick 500 00:04:53.498 tick 100 00:04:53.498 tick 100 00:04:53.498 tick 250 00:04:53.498 tick 100 00:04:53.498 tick 100 00:04:53.498 test_end 00:04:53.498 00:04:53.498 real 0m1.226s 00:04:53.498 user 0m1.131s 00:04:53.498 sys 0m0.091s 00:04:53.498 12:25:37 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:53.498 12:25:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:53.498 ************************************ 00:04:53.498 END TEST event_reactor 00:04:53.498 ************************************ 00:04:53.498 12:25:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.498 12:25:37 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:04:53.498 12:25:37 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.498 12:25:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.498 ************************************ 00:04:53.498 START TEST event_reactor_perf 00:04:53.498 ************************************ 00:04:53.498 12:25:37 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.498 [2024-05-15 12:25:37.799968] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:04:53.498 [2024-05-15 12:25:37.800050] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383690 ] 00:04:53.498 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.498 [2024-05-15 12:25:37.873088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.498 [2024-05-15 12:25:37.942811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.430 test_start 00:04:54.430 test_end 00:04:54.430 Performance: 964317 events per second 00:04:54.430 00:04:54.430 real 0m1.228s 00:04:54.430 user 0m1.136s 00:04:54.430 sys 0m0.088s 00:04:54.430 12:25:39 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:54.430 12:25:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.430 ************************************ 00:04:54.430 END TEST event_reactor_perf 00:04:54.430 ************************************ 00:04:54.688 12:25:39 event -- event/event.sh@49 -- # uname -s 00:04:54.688 12:25:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.688 12:25:39 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.688 12:25:39 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:54.688 12:25:39 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:54.688 12:25:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.688 ************************************ 00:04:54.688 START TEST event_scheduler 00:04:54.688 ************************************ 00:04:54.688 12:25:39 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.688 * Looking for test storage... 00:04:54.688 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:04:54.688 12:25:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.688 12:25:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2383998 00:04:54.688 12:25:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.688 12:25:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2383998 00:04:54.688 12:25:39 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 2383998 ']' 00:04:54.688 12:25:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.688 12:25:39 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.688 12:25:39 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:54.688 12:25:39 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:54.688 12:25:39 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:54.688 12:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.688 [2024-05-15 12:25:39.225622] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:04:54.688 [2024-05-15 12:25:39.225672] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383998 ] 00:04:54.688 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.688 [2024-05-15 12:25:39.288922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.945 [2024-05-15 12:25:39.371888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.945 [2024-05-15 12:25:39.371970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.945 [2024-05-15 12:25:39.372057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.945 [2024-05-15 12:25:39.372059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.509 12:25:40 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:55.509 12:25:40 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:04:55.509 12:25:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:55.509 12:25:40 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.509 12:25:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.509 POWER: Env isn't set yet! 00:04:55.509 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:55.509 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.509 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.509 POWER: Attempting to initialise PSTAT power management... 00:04:55.509 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:55.509 POWER: Initialized successfully for lcore 0 power management 00:04:55.509 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:55.509 POWER: Initialized successfully for lcore 1 power management 00:04:55.509 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:55.509 POWER: Initialized successfully for lcore 2 power management 00:04:55.509 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:55.509 POWER: Initialized successfully for lcore 3 power management 00:04:55.509 12:25:40 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.509 12:25:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:55.509 12:25:40 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.509 12:25:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 [2024-05-15 12:25:40.185345] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
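Editor's note: the scheduler suite above starts a dedicated test application paused (--wait-for-rpc), switches the framework to the dynamic scheduler over RPC, and only then completes initialization, which is when the per-lcore POWER governor messages appear. A sketch of that setup sequence, with illustrative relative paths and without the waitforlisten/rpc_cmd plumbing the real scheduler.sh uses:

#!/usr/bin/env bash
# Start the scheduler test app on cores 0-3 with core 2 as main, paused until RPC init.
./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
# (the real script waits here until /var/tmp/spdk.sock is accepting RPCs)

# Select the dynamic scheduler, then let the framework finish starting up.
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init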
00:04:55.767 12:25:40 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:55.767 12:25:40 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:55.767 12:25:40 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 ************************************ 00:04:55.767 START TEST scheduler_create_thread 00:04:55.767 ************************************ 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 2 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 3 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 4 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 5 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 6 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 7 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 8 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 9 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.767 10 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.767 12:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.699 12:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.699 12:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:56.699 12:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:56.699 12:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.699 12:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.660 12:25:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:57.660 12:25:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:57.660 12:25:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:57.660 12:25:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.594 12:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:58.594 12:25:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:58.594 12:25:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:58.594 12:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:58.594 12:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.523 12:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:59.523 00:04:59.523 real 0m3.557s 00:04:59.523 user 0m0.025s 00:04:59.523 sys 0m0.006s 00:04:59.523 12:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:59.523 12:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.523 ************************************ 00:04:59.523 END TEST scheduler_create_thread 00:04:59.523 ************************************ 00:04:59.523 12:25:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:59.523 12:25:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2383998 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 2383998 ']' 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 2383998 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2383998 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2383998' 00:04:59.523 killing process with pid 2383998 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 2383998 00:04:59.523 12:25:43 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 2383998 00:04:59.782 [2024-05-15 12:25:44.173866] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
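Editor's note: the scheduler_create_thread subtest drives the test app through a plugin for rpc.py: each scheduler_thread_create call registers a worker thread with a given cpumask and active percentage, and the returned thread id is later used to change its load or delete it (threads 11 and 12 above). A sketch of that RPC sequence; the PYTHONPATH location of scheduler_plugin and the assumption that scheduler_thread_create prints the bare thread id are taken from the test's behaviour, not verified independently:

#!/usr/bin/env bash
# assumed location of scheduler_plugin.py inside the SPDK tree
export PYTHONPATH=./test/event/scheduler

# Pinned, fully busy thread on core 0 (mask 0x1, 100 % active).
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

# Unpinned thread that is active 30 % of the time; capture its thread id...
thread_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30)

# ...raise it to 50 % active, then create and delete a throwaway thread,
# mirroring the calls in the log above.
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
doomed_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$doomed_id"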
00:04:59.782 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:59.782 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:59.782 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:59.782 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:59.782 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:59.782 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:59.782 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:59.782 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:00.040 00:05:00.040 real 0m5.313s 00:05:00.040 user 0m11.232s 00:05:00.040 sys 0m0.414s 00:05:00.040 12:25:44 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:00.040 12:25:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.040 ************************************ 00:05:00.040 END TEST event_scheduler 00:05:00.040 ************************************ 00:05:00.040 12:25:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.040 12:25:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.040 12:25:44 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:00.040 12:25:44 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:00.040 12:25:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.040 ************************************ 00:05:00.040 START TEST app_repeat 00:05:00.040 ************************************ 00:05:00.040 12:25:44 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2384867 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2384867' 00:05:00.040 Process app_repeat pid: 2384867 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.040 spdk_app_start Round 0 00:05:00.040 12:25:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2384867 /var/tmp/spdk-nbd.sock 00:05:00.040 12:25:44 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2384867 ']' 00:05:00.040 12:25:44 event.app_repeat -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.040 12:25:44 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:00.040 12:25:44 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.040 12:25:44 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:00.040 12:25:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.040 [2024-05-15 12:25:44.545348] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:00.040 [2024-05-15 12:25:44.545467] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384867 ] 00:05:00.040 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.040 [2024-05-15 12:25:44.618883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.298 [2024-05-15 12:25:44.691826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.298 [2024-05-15 12:25:44.691828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.862 12:25:45 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:00.862 12:25:45 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:00.862 12:25:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.119 Malloc0 00:05:01.119 12:25:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.119 Malloc1 00:05:01.119 12:25:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.119 12:25:45 event.app_repeat -- bdev/nbd_common.sh@15 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.376 /dev/nbd0 00:05:01.376 12:25:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.376 12:25:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.376 1+0 records in 00:05:01.376 1+0 records out 00:05:01.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271495 s, 15.1 MB/s 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:01.376 12:25:45 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:01.376 12:25:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.376 12:25:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.376 12:25:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.633 /dev/nbd1 00:05:01.633 12:25:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.633 12:25:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.633 1+0 records in 00:05:01.633 1+0 records out 00:05:01.633 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000236347 s, 17.3 MB/s 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:01.633 12:25:46 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:01.633 12:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.633 12:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.633 12:25:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.633 12:25:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.633 12:25:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.891 { 00:05:01.891 "nbd_device": "/dev/nbd0", 00:05:01.891 "bdev_name": "Malloc0" 00:05:01.891 }, 00:05:01.891 { 00:05:01.891 "nbd_device": "/dev/nbd1", 00:05:01.891 "bdev_name": "Malloc1" 00:05:01.891 } 00:05:01.891 ]' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.891 { 00:05:01.891 "nbd_device": "/dev/nbd0", 00:05:01.891 "bdev_name": "Malloc0" 00:05:01.891 }, 00:05:01.891 { 00:05:01.891 "nbd_device": "/dev/nbd1", 00:05:01.891 "bdev_name": "Malloc1" 00:05:01.891 } 00:05:01.891 ]' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.891 /dev/nbd1' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.891 /dev/nbd1' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.891 256+0 records in 00:05:01.891 256+0 records out 00:05:01.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110333 s, 95.0 MB/s 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i 
in "${nbd_list[@]}" 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.891 256+0 records in 00:05:01.891 256+0 records out 00:05:01.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203922 s, 51.4 MB/s 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.891 256+0 records in 00:05:01.891 256+0 records out 00:05:01.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021665 s, 48.4 MB/s 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.891 12:25:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.148 12:25:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.148 12:25:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.148 12:25:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.149 12:25:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.149 12:25:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.149 12:25:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.149 12:25:46 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:02.149 12:25:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.149 12:25:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.149 12:25:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.406 12:25:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.663 12:25:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.663 12:25:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.663 12:25:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.920 [2024-05-15 12:25:47.444921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.920 [2024-05-15 12:25:47.509936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.920 [2024-05-15 12:25:47.509938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.177 [2024-05-15 12:25:47.551218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.177 [2024-05-15 12:25:47.551258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
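[editor note] Round 0 above exercises a write-then-verify pass over both nbd devices: fill a temporary file from /dev/urandom, dd it onto each /dev/nbdX with oflag=direct, then cmp the first 1M of the device back against the file. A rough bash sketch of that flow follows; the block sizes and cmp flags are taken from the trace, while the helper name and scratch path here are only illustrative.

verify_nbd_devices() {
    local tmp_file=/tmp/nbdrandtest      # illustrative path; the test writes into its own workspace
    local nbd_list=(/dev/nbd0 /dev/nbd1)
    # write phase: generate 1 MiB of random data and push it to every nbd device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1M of each device must match the random file byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"
}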
00:05:05.706 12:25:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.706 12:25:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:05.706 spdk_app_start Round 1 00:05:05.706 12:25:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2384867 /var/tmp/spdk-nbd.sock 00:05:05.706 12:25:50 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2384867 ']' 00:05:05.706 12:25:50 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.706 12:25:50 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:05.706 12:25:50 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.706 12:25:50 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:05.706 12:25:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.966 12:25:50 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:05.966 12:25:50 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:05.966 12:25:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.223 Malloc0 00:05:06.223 12:25:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.223 Malloc1 00:05:06.223 12:25:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.223 12:25:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.480 /dev/nbd0 00:05:06.480 12:25:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.480 12:25:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.480 1+0 records in 00:05:06.480 1+0 records out 00:05:06.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231888 s, 17.7 MB/s 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:06.480 12:25:50 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:06.480 12:25:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.480 12:25:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.480 12:25:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.737 /dev/nbd1 00:05:06.737 12:25:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.737 12:25:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.737 1+0 records in 00:05:06.737 1+0 records out 00:05:06.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271884 s, 15.1 MB/s 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:06.737 12:25:51 
event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:06.737 12:25:51 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:06.737 12:25:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.737 12:25:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.737 12:25:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.737 12:25:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.737 12:25:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.994 12:25:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.994 { 00:05:06.994 "nbd_device": "/dev/nbd0", 00:05:06.994 "bdev_name": "Malloc0" 00:05:06.994 }, 00:05:06.994 { 00:05:06.994 "nbd_device": "/dev/nbd1", 00:05:06.994 "bdev_name": "Malloc1" 00:05:06.994 } 00:05:06.994 ]' 00:05:06.994 12:25:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.994 { 00:05:06.994 "nbd_device": "/dev/nbd0", 00:05:06.994 "bdev_name": "Malloc0" 00:05:06.994 }, 00:05:06.994 { 00:05:06.994 "nbd_device": "/dev/nbd1", 00:05:06.994 "bdev_name": "Malloc1" 00:05:06.994 } 00:05:06.994 ]' 00:05:06.994 12:25:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.994 12:25:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.994 /dev/nbd1' 00:05:06.994 12:25:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.994 /dev/nbd1' 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.995 256+0 records in 00:05:06.995 256+0 records out 00:05:06.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108796 s, 96.4 MB/s 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.995 256+0 records in 00:05:06.995 256+0 records out 00:05:06.995 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0202133 s, 51.9 MB/s 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.995 256+0 records in 00:05:06.995 256+0 records out 00:05:06.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217828 s, 48.1 MB/s 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.995 12:25:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.251 12:25:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.507 12:25:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.507 12:25:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.507 12:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.507 12:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.764 12:25:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.764 12:25:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.764 12:25:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.021 [2024-05-15 12:25:52.516233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.021 [2024-05-15 12:25:52.584029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.022 [2024-05-15 12:25:52.584032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.022 [2024-05-15 12:25:52.626080] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.022 [2024-05-15 12:25:52.626127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
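[editor note] As in Round 0, each nbd device is only touched after a waitfornbd-style readiness check: poll /proc/partitions for the device name, then issue one direct 4 KiB read and confirm it produced data. A hedged sketch of that loop, reconstructed from the xtrace (the retry limit and sizes appear there; the sleep between retries and the scratch path are assumptions):

waitfornbd() {
    local nbd_name=$1                 # e.g. nbd0
    local i
    # wait until the kernel has registered the device as a partition
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                     # assumed back-off; the trace succeeds on the first pass
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # one direct read; a non-zero result file proves the backing bdev serves I/O
    local probe=/tmp/nbdtest          # illustrative scratch file
    dd if=/dev/"$nbd_name" of="$probe" bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s "$probe")
    rm -f "$probe"
    [ "$size" != 0 ]
}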
00:05:11.295 12:25:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.295 12:25:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:11.295 spdk_app_start Round 2 00:05:11.295 12:25:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2384867 /var/tmp/spdk-nbd.sock 00:05:11.295 12:25:55 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2384867 ']' 00:05:11.295 12:25:55 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.295 12:25:55 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:11.295 12:25:55 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.295 12:25:55 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:11.295 12:25:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.295 12:25:55 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:11.295 12:25:55 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:11.295 12:25:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.295 Malloc0 00:05:11.295 12:25:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.295 Malloc1 00:05:11.295 12:25:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.295 12:25:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.553 /dev/nbd0 00:05:11.553 12:25:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.553 12:25:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.553 1+0 records in 00:05:11.553 1+0 records out 00:05:11.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267787 s, 15.3 MB/s 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:11.553 12:25:56 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:11.553 12:25:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.553 12:25:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.553 12:25:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.810 /dev/nbd1 00:05:11.810 12:25:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.810 12:25:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.810 1+0 records in 00:05:11.810 1+0 records out 00:05:11.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235263 s, 17.4 MB/s 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:11.810 12:25:56 
event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:11.810 12:25:56 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:11.810 12:25:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.810 12:25:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.810 12:25:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.810 12:25:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.810 12:25:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.067 { 00:05:12.067 "nbd_device": "/dev/nbd0", 00:05:12.067 "bdev_name": "Malloc0" 00:05:12.067 }, 00:05:12.067 { 00:05:12.067 "nbd_device": "/dev/nbd1", 00:05:12.067 "bdev_name": "Malloc1" 00:05:12.067 } 00:05:12.067 ]' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.067 { 00:05:12.067 "nbd_device": "/dev/nbd0", 00:05:12.067 "bdev_name": "Malloc0" 00:05:12.067 }, 00:05:12.067 { 00:05:12.067 "nbd_device": "/dev/nbd1", 00:05:12.067 "bdev_name": "Malloc1" 00:05:12.067 } 00:05:12.067 ]' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.067 /dev/nbd1' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.067 /dev/nbd1' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.067 256+0 records in 00:05:12.067 256+0 records out 00:05:12.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105912 s, 99.0 MB/s 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.067 256+0 records in 00:05:12.067 256+0 records out 00:05:12.067 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0203172 s, 51.6 MB/s 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.067 256+0 records in 00:05:12.067 256+0 records out 00:05:12.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021681 s, 48.4 MB/s 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.067 12:25:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.324 12:25:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.581 12:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.581 12:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.581 12:25:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.581 12:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.581 12:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.581 12:25:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.581 12:25:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.581 12:25:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.581 12:25:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.581 12:25:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.581 12:25:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.581 12:25:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.581 12:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.581 12:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.838 12:25:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.838 12:25:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.838 12:25:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.095 [2024-05-15 12:25:57.608377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.095 [2024-05-15 12:25:57.673350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.095 [2024-05-15 12:25:57.673352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.352 [2024-05-15 12:25:57.715760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.352 [2024-05-15 12:25:57.715800] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
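[editor note] All three rounds traced above follow the same outer loop: announce the round, wait for the target on the nbd socket, create the Malloc0/Malloc1 bdevs, run the nbd write/verify pass, then ask the app to restart itself with spdk_kill_instance SIGTERM and sleep while the next iteration starts. A condensed sketch of that loop, using only RPC calls that appear in the trace (the rpc.py path and loop variable names here are illustrative):

rpc_server=/var/tmp/spdk-nbd.sock
rpc_py=./scripts/rpc.py               # illustrative relative path to the SPDK rpc.py script
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"
    # create the two 64 MiB / 4 KiB-block malloc bdevs verified in the sketches above
    $rpc_py -s "$rpc_server" bdev_malloc_create 64 4096
    $rpc_py -s "$rpc_server" bdev_malloc_create 64 4096
    # ask the running app to shut down so the next round can start
    $rpc_py -s "$rpc_server" spdk_kill_instance SIGTERM
    sleep 3
done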
00:05:15.871 12:26:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2384867 /var/tmp/spdk-nbd.sock 00:05:15.871 12:26:00 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2384867 ']' 00:05:15.871 12:26:00 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.871 12:26:00 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:15.871 12:26:00 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.871 12:26:00 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:15.871 12:26:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:16.127 12:26:00 event.app_repeat -- event/event.sh@39 -- # killprocess 2384867 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 2384867 ']' 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 2384867 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2384867 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2384867' 00:05:16.127 killing process with pid 2384867 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@966 -- # kill 2384867 00:05:16.127 12:26:00 event.app_repeat -- common/autotest_common.sh@971 -- # wait 2384867 00:05:16.384 spdk_app_start is called in Round 0. 00:05:16.384 Shutdown signal received, stop current app iteration 00:05:16.384 Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 reinitialization... 00:05:16.384 spdk_app_start is called in Round 1. 00:05:16.384 Shutdown signal received, stop current app iteration 00:05:16.384 Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 reinitialization... 00:05:16.384 spdk_app_start is called in Round 2. 00:05:16.384 Shutdown signal received, stop current app iteration 00:05:16.384 Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 reinitialization... 00:05:16.384 spdk_app_start is called in Round 3. 
00:05:16.384 Shutdown signal received, stop current app iteration 00:05:16.384 12:26:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.384 12:26:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.384 00:05:16.384 real 0m16.307s 00:05:16.384 user 0m34.542s 00:05:16.384 sys 0m3.150s 00:05:16.384 12:26:00 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:16.384 12:26:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.384 ************************************ 00:05:16.384 END TEST app_repeat 00:05:16.384 ************************************ 00:05:16.384 12:26:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:16.384 12:26:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.384 12:26:00 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:16.384 12:26:00 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:16.384 12:26:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.384 ************************************ 00:05:16.384 START TEST cpu_locks 00:05:16.384 ************************************ 00:05:16.385 12:26:00 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.642 * Looking for test storage... 00:05:16.642 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:16.642 12:26:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:16.642 12:26:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:16.642 12:26:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:16.642 12:26:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:16.642 12:26:01 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:16.642 12:26:01 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:16.642 12:26:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.642 ************************************ 00:05:16.642 START TEST default_locks 00:05:16.642 ************************************ 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2388021 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2388021 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2388021 ']' 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
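[editor note] The default_locks test starting here checks lock ownership from outside the target: once spdk_tgt is up on core 0 it runs lslocks against the pid and greps for the spdk_cpu_lock file, so the "lslocks: write error" line further down is most likely lslocks complaining that grep -q closed its output pipe early, not a test failure. A small hedged sketch of that check (the helper name matches the trace; the body is a simplification):

locks_exist() {
    local pid=$1
    # the target holds one lock file per claimed CPU core; lslocks lists locks held by the pid
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}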
00:05:16.642 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:16.642 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.642 [2024-05-15 12:26:01.087215] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:16.642 [2024-05-15 12:26:01.087297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388021 ] 00:05:16.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.642 [2024-05-15 12:26:01.156189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.642 [2024-05-15 12:26:01.227704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.572 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:17.572 12:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:05:17.572 12:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2388021 00:05:17.572 12:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2388021 00:05:17.572 12:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.136 lslocks: write error 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2388021 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 2388021 ']' 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 2388021 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2388021 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2388021' 00:05:18.136 killing process with pid 2388021 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 2388021 00:05:18.136 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 2388021 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2388021 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2388021 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 2388021 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2388021 ']' 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.394 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2388021) - No such process 00:05:18.394 ERROR: process (pid: 2388021) is no longer running 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.394 00:05:18.394 real 0m1.786s 00:05:18.394 user 0m1.875s 00:05:18.394 sys 0m0.631s 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:18.394 12:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.394 ************************************ 00:05:18.394 END TEST default_locks 00:05:18.394 ************************************ 00:05:18.394 12:26:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.394 12:26:02 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:18.394 12:26:02 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.394 12:26:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.394 ************************************ 00:05:18.394 START TEST default_locks_via_rpc 00:05:18.394 ************************************ 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2388324 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2388324 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.394 12:26:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2388324 ']' 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:18.394 12:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.394 [2024-05-15 12:26:02.961282] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:18.394 [2024-05-15 12:26:02.961348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388324 ] 00:05:18.394 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.655 [2024-05-15 12:26:03.031396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.655 [2024-05-15 12:26:03.102385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:19.221 12:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.222 12:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.222 12:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.222 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2388324 00:05:19.222 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2388324 00:05:19.222 12:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2388324 00:05:19.787 12:26:04 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 2388324 ']' 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 2388324 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2388324 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2388324' 00:05:19.787 killing process with pid 2388324 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 2388324 00:05:19.787 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 2388324 00:05:20.044 00:05:20.044 real 0m1.724s 00:05:20.044 user 0m1.819s 00:05:20.044 sys 0m0.598s 00:05:20.044 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:20.044 12:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.044 ************************************ 00:05:20.044 END TEST default_locks_via_rpc 00:05:20.044 ************************************ 00:05:20.302 12:26:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:20.302 12:26:04 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:20.302 12:26:04 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:20.302 12:26:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.302 ************************************ 00:05:20.302 START TEST non_locking_app_on_locked_coremask 00:05:20.302 ************************************ 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2388715 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2388715 /var/tmp/spdk.sock 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2388715 ']' 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
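The locks_exist check that both default_locks tests rely on (visible above as 'lslocks -p <pid>' piped into 'grep -q spdk_cpu_lock') simply asks util-linux lslocks whether the target process still holds a lock on one of the /var/tmp/spdk_cpu_lock_* files, and the no_locks assertion run after shutdown checks that no such files remain. Roughly, with helper names borrowed from the trace (the real implementations in cpu_locks.sh may differ):

  locks_exist() {                           # does pid $1 hold an SPDK per-core lock?
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  no_locks() {                              # after shutdown, no per-core lock files should be left behind
      local files=(/var/tmp/spdk_cpu_lock_*)
      [ ! -e "${files[0]}" ]                # unexpanded glob means nothing matched
  }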
00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:20.302 12:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.302 [2024-05-15 12:26:04.770202] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:20.302 [2024-05-15 12:26:04.770282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388715 ] 00:05:20.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.302 [2024-05-15 12:26:04.840589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.302 [2024-05-15 12:26:04.918585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2388880 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2388880 /var/tmp/spdk2.sock 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2388880 ']' 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:21.234 12:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.234 [2024-05-15 12:26:05.610917] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:21.234 [2024-05-15 12:26:05.610981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388880 ] 00:05:21.234 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.234 [2024-05-15 12:26:05.700666] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
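non_locking_app_on_locked_coremask, running above, keeps the first target's core 0 lock in place and starts a second spdk_tgt on the same mask with --disable-cpumask-locks and its own RPC socket, which is why the second instance logs 'CPU core locks deactivated' instead of failing. Condensed to the flags shown in the trace:

  spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                                                   # first target claims the core 0 lock
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # second target skips core locking
  while [ ! -S /var/tmp/spdk2.sock ]; do sleep 0.1; done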
00:05:21.234 [2024-05-15 12:26:05.700687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.234 [2024-05-15 12:26:05.844582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.166 12:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:22.166 12:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:22.166 12:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2388715 00:05:22.166 12:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2388715 00:05:22.166 12:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.156 lslocks: write error 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2388715 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2388715 ']' 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2388715 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2388715 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2388715' 00:05:23.156 killing process with pid 2388715 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2388715 00:05:23.156 12:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2388715 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2388880 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2388880 ']' 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2388880 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2388880 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2388880' 00:05:23.722 
killing process with pid 2388880 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2388880 00:05:23.722 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2388880 00:05:24.288 00:05:24.288 real 0m3.891s 00:05:24.288 user 0m4.132s 00:05:24.288 sys 0m1.302s 00:05:24.288 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:24.288 12:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.288 ************************************ 00:05:24.288 END TEST non_locking_app_on_locked_coremask 00:05:24.288 ************************************ 00:05:24.288 12:26:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:24.288 12:26:08 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:24.288 12:26:08 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:24.288 12:26:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.288 ************************************ 00:05:24.288 START TEST locking_app_on_unlocked_coremask 00:05:24.288 ************************************ 00:05:24.288 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:05:24.288 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2389455 00:05:24.288 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2389455 /var/tmp/spdk.sock 00:05:24.288 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:24.288 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2389455 ']' 00:05:24.288 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.288 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:24.288 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.289 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:24.289 12:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.289 [2024-05-15 12:26:08.748999] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:24.289 [2024-05-15 12:26:08.749066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389455 ] 00:05:24.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.289 [2024-05-15 12:26:08.818417] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.289 [2024-05-15 12:26:08.818443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.289 [2024-05-15 12:26:08.885446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2389678 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2389678 /var/tmp/spdk2.sock 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2389678 ']' 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:25.221 12:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.221 [2024-05-15 12:26:09.578492] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:25.221 [2024-05-15 12:26:09.578544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389678 ] 00:05:25.221 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.221 [2024-05-15 12:26:09.670446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.221 [2024-05-15 12:26:09.814597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.152 12:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:26.152 12:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:26.152 12:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2389678 00:05:26.152 12:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2389678 00:05:26.152 12:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.083 lslocks: write error 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2389455 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2389455 ']' 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2389455 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2389455 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2389455' 00:05:27.083 killing process with pid 2389455 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2389455 00:05:27.083 12:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2389455 00:05:27.648 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2389678 00:05:27.648 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2389678 ']' 00:05:27.648 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2389678 00:05:27.648 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:27.648 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:27.648 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2389678 00:05:27.649 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:05:27.649 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:27.649 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2389678' 00:05:27.649 killing process with pid 2389678 00:05:27.649 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2389678 00:05:27.649 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2389678 00:05:27.906 00:05:27.906 real 0m3.761s 00:05:27.906 user 0m4.007s 00:05:27.906 sys 0m1.200s 00:05:27.906 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:27.906 12:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.906 ************************************ 00:05:27.906 END TEST locking_app_on_unlocked_coremask 00:05:27.906 ************************************ 00:05:28.164 12:26:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:28.164 12:26:12 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:28.164 12:26:12 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:28.164 12:26:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.164 ************************************ 00:05:28.164 START TEST locking_app_on_locked_coremask 00:05:28.164 ************************************ 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2390200 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2390200 /var/tmp/spdk.sock 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2390200 ']' 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:28.164 12:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.164 [2024-05-15 12:26:12.594185] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:28.164 [2024-05-15 12:26:12.594265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390200 ] 00:05:28.164 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.164 [2024-05-15 12:26:12.662867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.164 [2024-05-15 12:26:12.741151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2390303 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2390303 /var/tmp/spdk2.sock 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2390303 /var/tmp/spdk2.sock 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2390303 /var/tmp/spdk2.sock 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2390303 ']' 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:29.116 12:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.116 [2024-05-15 12:26:13.432043] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:29.116 [2024-05-15 12:26:13.432133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390303 ] 00:05:29.116 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.116 [2024-05-15 12:26:13.523482] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2390200 has claimed it. 00:05:29.116 [2024-05-15 12:26:13.523513] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.681 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2390303) - No such process 00:05:29.681 ERROR: process (pid: 2390303) is no longer running 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2390200 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2390200 00:05:29.681 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.247 lslocks: write error 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2390200 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2390200 ']' 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2390200 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2390200 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2390200' 00:05:30.247 killing process with pid 2390200 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2390200 00:05:30.247 12:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2390200 00:05:30.504 00:05:30.504 real 0m2.470s 00:05:30.504 user 0m2.674s 00:05:30.504 sys 0m0.773s 00:05:30.504 12:26:15 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:05:30.504 12:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.504 ************************************ 00:05:30.504 END TEST locking_app_on_locked_coremask 00:05:30.504 ************************************ 00:05:30.504 12:26:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.504 12:26:15 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:30.504 12:26:15 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:30.504 12:26:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.764 ************************************ 00:05:30.764 START TEST locking_overlapped_coremask 00:05:30.764 ************************************ 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2390598 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2390598 /var/tmp/spdk.sock 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2390598 ']' 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:30.764 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.764 [2024-05-15 12:26:15.149994] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
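locking_app_on_locked_coremask, which finished just above, is the negative case: both instances keep core locks enabled, so the second spdk_tgt aborts with 'Cannot create lock on core 0, probably process 2390200 has claimed it' and the NOT wrapper treats that non-zero exit as a pass. As a sketch with the same binary and flags:

  spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                                   # holds /var/tmp/spdk_cpu_lock_000
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  if "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then     # same core, locks still enabled: expected to fail
      echo "unexpected success: core 0 lock was not enforced" >&2
      exit 1
  fi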
00:05:30.764 [2024-05-15 12:26:15.150075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390598 ] 00:05:30.764 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.764 [2024-05-15 12:26:15.219842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.764 [2024-05-15 12:26:15.299963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.764 [2024-05-15 12:26:15.300056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.764 [2024-05-15 12:26:15.300059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2390865 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2390865 /var/tmp/spdk2.sock 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2390865 /var/tmp/spdk2.sock 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2390865 /var/tmp/spdk2.sock 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2390865 ']' 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:31.694 12:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.694 [2024-05-15 12:26:15.994661] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:31.694 [2024-05-15 12:26:15.994731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390865 ] 00:05:31.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.694 [2024-05-15 12:26:16.087732] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2390598 has claimed it. 00:05:31.694 [2024-05-15 12:26:16.087767] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.257 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2390865) - No such process 00:05:32.257 ERROR: process (pid: 2390865) is no longer running 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2390598 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 2390598 ']' 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 2390598 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2390598 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2390598' 00:05:32.257 killing process with pid 2390598 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 
2390598 00:05:32.257 12:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 2390598 00:05:32.514 00:05:32.514 real 0m1.881s 00:05:32.514 user 0m5.251s 00:05:32.514 sys 0m0.481s 00:05:32.514 12:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:32.514 12:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.514 ************************************ 00:05:32.514 END TEST locking_overlapped_coremask 00:05:32.514 ************************************ 00:05:32.514 12:26:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:32.515 12:26:17 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:32.515 12:26:17 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:32.515 12:26:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.515 ************************************ 00:05:32.515 START TEST locking_overlapped_coremask_via_rpc 00:05:32.515 ************************************ 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2391027 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2391027 /var/tmp/spdk.sock 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2391027 ']' 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:32.515 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.515 [2024-05-15 12:26:17.117689] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:32.515 [2024-05-15 12:26:17.117750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391027 ] 00:05:32.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.772 [2024-05-15 12:26:17.187079] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.772 [2024-05-15 12:26:17.187103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.772 [2024-05-15 12:26:17.265092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.772 [2024-05-15 12:26:17.265108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.772 [2024-05-15 12:26:17.265109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2391173 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2391173 /var/tmp/spdk2.sock 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2391173 ']' 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:33.337 12:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.595 [2024-05-15 12:26:17.955043] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:33.595 [2024-05-15 12:26:17.955135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391173 ] 00:05:33.595 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.595 [2024-05-15 12:26:18.049501] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.595 [2024-05-15 12:26:18.049530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.595 [2024-05-15 12:26:18.195187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.595 [2024-05-15 12:26:18.198433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.595 [2024-05-15 12:26:18.198434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:34.159 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:34.159 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.416 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.417 [2024-05-15 12:26:18.796456] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2391027 has claimed it. 
00:05:34.417 request: 00:05:34.417 { 00:05:34.417 "method": "framework_enable_cpumask_locks", 00:05:34.417 "req_id": 1 00:05:34.417 } 00:05:34.417 Got JSON-RPC error response 00:05:34.417 response: 00:05:34.417 { 00:05:34.417 "code": -32603, 00:05:34.417 "message": "Failed to claim CPU core: 2" 00:05:34.417 } 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2391027 /var/tmp/spdk.sock 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2391027 ']' 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2391173 /var/tmp/spdk2.sock 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2391173 ']' 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:34.417 12:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.674 12:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:34.674 12:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:34.674 12:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:34.674 12:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.674 12:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.674 12:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.674 00:05:34.674 real 0m2.078s 00:05:34.674 user 0m0.800s 00:05:34.674 sys 0m0.208s 00:05:34.674 12:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:34.674 12:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.674 ************************************ 00:05:34.674 END TEST locking_overlapped_coremask_via_rpc 00:05:34.674 ************************************ 00:05:34.674 12:26:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:34.674 12:26:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2391027 ]] 00:05:34.674 12:26:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2391027 00:05:34.674 12:26:19 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2391027 ']' 00:05:34.674 12:26:19 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2391027 00:05:34.674 12:26:19 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:34.674 12:26:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:34.675 12:26:19 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2391027 00:05:34.675 12:26:19 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:34.675 12:26:19 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:34.675 12:26:19 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2391027' 00:05:34.675 killing process with pid 2391027 00:05:34.675 12:26:19 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2391027 00:05:34.675 12:26:19 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2391027 00:05:35.239 12:26:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2391173 ]] 00:05:35.239 12:26:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2391173 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2391173 ']' 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2391173 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2391173 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2391173' 00:05:35.239 killing process with pid 2391173 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2391173 00:05:35.239 12:26:19 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2391173 00:05:35.497 12:26:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.497 12:26:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:35.497 12:26:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2391027 ]] 00:05:35.497 12:26:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2391027 00:05:35.497 12:26:19 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2391027 ']' 00:05:35.497 12:26:19 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2391027 00:05:35.497 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2391027) - No such process 00:05:35.497 12:26:19 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2391027 is not found' 00:05:35.497 Process with pid 2391027 is not found 00:05:35.497 12:26:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2391173 ]] 00:05:35.497 12:26:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2391173 00:05:35.497 12:26:19 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2391173 ']' 00:05:35.497 12:26:19 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2391173 00:05:35.497 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2391173) - No such process 00:05:35.497 12:26:19 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2391173 is not found' 00:05:35.497 Process with pid 2391173 is not found 00:05:35.497 12:26:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.497 00:05:35.497 real 0m19.040s 00:05:35.497 user 0m31.084s 00:05:35.497 sys 0m6.267s 00:05:35.497 12:26:19 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.497 12:26:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.497 ************************************ 00:05:35.497 END TEST cpu_locks 00:05:35.497 ************************************ 00:05:35.497 00:05:35.497 real 0m45.003s 00:05:35.497 user 1m23.466s 00:05:35.497 sys 0m10.570s 00:05:35.497 12:26:19 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.497 12:26:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.497 ************************************ 00:05:35.497 END TEST event 00:05:35.497 ************************************ 00:05:35.497 12:26:20 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:35.497 12:26:20 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:35.497 12:26:20 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.497 12:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:35.497 ************************************ 00:05:35.497 START TEST thread 00:05:35.497 ************************************ 00:05:35.497 12:26:20 thread -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:35.755 * Looking for test storage... 00:05:35.755 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:35.755 12:26:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.755 12:26:20 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:35.755 12:26:20 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.755 12:26:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.755 ************************************ 00:05:35.755 START TEST thread_poller_perf 00:05:35.755 ************************************ 00:05:35.755 12:26:20 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.755 [2024-05-15 12:26:20.250010] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:35.755 [2024-05-15 12:26:20.250093] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391708 ] 00:05:35.755 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.755 [2024-05-15 12:26:20.323852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.013 [2024-05-15 12:26:20.397947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.013 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:36.945 ====================================== 00:05:36.945 busy:2504766196 (cyc) 00:05:36.945 total_run_count: 873000 00:05:36.945 tsc_hz: 2500000000 (cyc) 00:05:36.945 ====================================== 00:05:36.945 poller_cost: 2869 (cyc), 1147 (nsec) 00:05:36.945 00:05:36.945 real 0m1.232s 00:05:36.945 user 0m1.138s 00:05:36.945 sys 0m0.090s 00:05:36.945 12:26:21 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:36.945 12:26:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.945 ************************************ 00:05:36.945 END TEST thread_poller_perf 00:05:36.945 ************************************ 00:05:36.945 12:26:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.945 12:26:21 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:36.945 12:26:21 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:36.945 12:26:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.945 ************************************ 00:05:36.945 START TEST thread_poller_perf 00:05:36.945 ************************************ 00:05:36.945 12:26:21 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.202 [2024-05-15 12:26:21.564814] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:37.202 [2024-05-15 12:26:21.564894] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391892 ] 00:05:37.202 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.202 [2024-05-15 12:26:21.637242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.202 [2024-05-15 12:26:21.708392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.202 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:38.573 ====================================== 00:05:38.573 busy:2501493062 (cyc) 00:05:38.573 total_run_count: 13903000 00:05:38.573 tsc_hz: 2500000000 (cyc) 00:05:38.573 ====================================== 00:05:38.573 poller_cost: 179 (cyc), 71 (nsec) 00:05:38.573 00:05:38.573 real 0m1.227s 00:05:38.573 user 0m1.129s 00:05:38.573 sys 0m0.093s 00:05:38.573 12:26:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:38.573 12:26:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.573 ************************************ 00:05:38.573 END TEST thread_poller_perf 00:05:38.573 ************************************ 00:05:38.573 12:26:22 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:38.573 12:26:22 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:38.573 12:26:22 thread -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:38.573 12:26:22 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:38.573 12:26:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.573 ************************************ 00:05:38.573 START TEST thread_spdk_lock 00:05:38.573 ************************************ 00:05:38.573 12:26:22 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:38.573 [2024-05-15 12:26:22.879115] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
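[editor's note] The poller_cost figures printed by the two poller_perf runs above (2869 cyc / 1147 nsec for the 1-microsecond-period pollers, 179 cyc / 71 nsec for the 0-period pollers) are simply the reported busy cycle count divided by total_run_count, converted to nanoseconds through tsc_hz. A minimal sketch of that arithmetic, assuming plain integer division, which matches the truncated values in the log; this is not the perf tool's own code:

    busy_cyc=2504766196    # "busy" cycles reported by the 1-microsecond-period run
    runs=873000            # total_run_count
    tsc_hz=2500000000      # timestamp-counter frequency reported as tsc_hz
    cyc_per_poll=$(( busy_cyc / runs ))                    # -> 2869 (cyc)
    ns_per_poll=$(( cyc_per_poll * 1000000000 / tsc_hz ))  # -> 1147 (nsec)
    echo "poller_cost: ${cyc_per_poll} (cyc), ${ns_per_poll} (nsec)"

The same division applied to the 0-period run (2501493062 / 13903000) yields the 179-cycle, 71-nanosecond figure: with no sleep between iterations, only the per-call overhead remains.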
00:05:38.573 [2024-05-15 12:26:22.879195] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392117 ] 00:05:38.573 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.573 [2024-05-15 12:26:22.953748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.573 [2024-05-15 12:26:23.031447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.573 [2024-05-15 12:26:23.031452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.137 [2024-05-15 12:26:23.522350] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:39.137 [2024-05-15 12:26:23.522387] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:39.137 [2024-05-15 12:26:23.522398] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x14b75c0 00:05:39.137 [2024-05-15 12:26:23.523293] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:39.137 [2024-05-15 12:26:23.523396] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:39.137 [2024-05-15 12:26:23.523415] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:39.137 Starting test contend 00:05:39.137 Worker Delay Wait us Hold us Total us 00:05:39.137 0 3 177502 185443 362945 00:05:39.137 1 5 90587 286536 377124 00:05:39.137 PASS test contend 00:05:39.137 Starting test hold_by_poller 00:05:39.137 PASS test hold_by_poller 00:05:39.137 Starting test hold_by_message 00:05:39.137 PASS test hold_by_message 00:05:39.137 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:39.137 100014 assertions passed 00:05:39.137 0 assertions failed 00:05:39.137 00:05:39.137 real 0m0.724s 00:05:39.137 user 0m1.123s 00:05:39.137 sys 0m0.088s 00:05:39.137 12:26:23 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:39.137 12:26:23 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:39.137 ************************************ 00:05:39.137 END TEST thread_spdk_lock 00:05:39.137 ************************************ 00:05:39.137 00:05:39.137 real 0m3.545s 00:05:39.137 user 0m3.516s 00:05:39.137 sys 0m0.523s 00:05:39.137 12:26:23 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:39.137 12:26:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.137 ************************************ 00:05:39.137 END TEST thread 00:05:39.137 ************************************ 00:05:39.137 12:26:23 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:39.137 12:26:23 -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:39.137 12:26:23 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:39.137 12:26:23 -- common/autotest_common.sh@10 -- # set +x 00:05:39.137 ************************************ 00:05:39.137 START TEST accel 00:05:39.137 ************************************ 00:05:39.137 12:26:23 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:39.395 * Looking for test storage... 00:05:39.395 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:39.395 12:26:23 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:39.395 12:26:23 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:39.395 12:26:23 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.395 12:26:23 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2392437 00:05:39.395 12:26:23 accel -- accel/accel.sh@63 -- # waitforlisten 2392437 00:05:39.395 12:26:23 accel -- common/autotest_common.sh@828 -- # '[' -z 2392437 ']' 00:05:39.395 12:26:23 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.395 12:26:23 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:39.395 12:26:23 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:39.395 12:26:23 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.395 12:26:23 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:39.395 12:26:23 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:39.395 12:26:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.395 12:26:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.395 12:26:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.395 12:26:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.395 12:26:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.395 12:26:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.395 12:26:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:39.395 12:26:23 accel -- accel/accel.sh@41 -- # jq -r . 00:05:39.395 [2024-05-15 12:26:23.843799] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:39.395 [2024-05-15 12:26:23.843870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392437 ] 00:05:39.395 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.395 [2024-05-15 12:26:23.915134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.395 [2024-05-15 12:26:23.994000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.329 12:26:24 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:40.329 12:26:24 accel -- common/autotest_common.sh@861 -- # return 0 00:05:40.329 12:26:24 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:40.329 12:26:24 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:40.329 12:26:24 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:40.329 12:26:24 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:40.329 12:26:24 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:40.329 12:26:24 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:40.329 12:26:24 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:40.329 12:26:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.329 12:26:24 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:40.329 12:26:24 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.329 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.329 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.329 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.329 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.329 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.329 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.329 
12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.329 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.329 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.329 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.329 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.330 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.330 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.330 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.330 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.330 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.330 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.330 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.330 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.330 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.330 12:26:24 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.330 12:26:24 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.330 12:26:24 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.330 12:26:24 accel -- accel/accel.sh@75 -- # killprocess 2392437 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@947 -- # '[' -z 2392437 ']' 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@951 -- # kill -0 2392437 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@952 -- # uname 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2392437 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2392437' 00:05:40.330 killing process with pid 2392437 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@966 -- # kill 2392437 00:05:40.330 12:26:24 accel -- common/autotest_common.sh@971 -- # wait 2392437 00:05:40.588 12:26:25 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:40.588 12:26:25 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:40.588 12:26:25 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:05:40.588 12:26:25 accel -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:05:40.588 12:26:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.588 12:26:25 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:05:40.588 12:26:25 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:40.588 12:26:25 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:40.588 12:26:25 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.588 12:26:25 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.588 12:26:25 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.589 12:26:25 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.589 12:26:25 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.589 12:26:25 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:40.589 12:26:25 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:40.589 12:26:25 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:40.589 12:26:25 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:40.589 12:26:25 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:40.589 12:26:25 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:40.589 12:26:25 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:40.589 12:26:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.847 ************************************ 00:05:40.847 START TEST accel_missing_filename 00:05:40.847 ************************************ 00:05:40.847 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:05:40.847 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:05:40.847 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:40.847 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:40.847 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.847 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:40.847 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.847 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:40.847 12:26:25 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 
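[editor's note] In the accel_perf invocations traced above, '-c /dev/fd/62' is not a file on disk: build_accel_config assembles the (here empty) JSON accel configuration, and the harness appears to hand it to the tool through bash process substitution, so the config only ever exists behind an anonymous file descriptor. A minimal illustration of that pattern; the command name and the empty JSON are placeholders, not the harness's own code:

    # <(...) expands to a /dev/fd/NN path, which is what surfaces as "-c /dev/fd/62" above
    cat <(echo '{}')                        # prints {} after reading it back through /dev/fd/NN
    # an equivalent hand-off would look like:  some_tool -c <(echo "$json_cfg")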
00:05:40.847 [2024-05-15 12:26:25.242895] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:40.847 [2024-05-15 12:26:25.242979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392737 ] 00:05:40.847 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.847 [2024-05-15 12:26:25.315085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.847 [2024-05-15 12:26:25.390043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.847 [2024-05-15 12:26:25.429620] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.105 [2024-05-15 12:26:25.489598] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:41.105 A filename is required. 00:05:41.105 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:05:41.105 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:41.105 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:05:41.105 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:05:41.105 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:05:41.105 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:41.105 00:05:41.105 real 0m0.338s 00:05:41.105 user 0m0.240s 00:05:41.105 sys 0m0.136s 00:05:41.105 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.105 12:26:25 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:41.105 ************************************ 00:05:41.105 END TEST accel_missing_filename 00:05:41.105 ************************************ 00:05:41.105 12:26:25 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:41.105 12:26:25 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:41.105 12:26:25 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.105 12:26:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.105 ************************************ 00:05:41.105 START TEST accel_compress_verify 00:05:41.105 ************************************ 00:05:41.105 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:41.105 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:05:41.105 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:41.105 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:41.105 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:41.105 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:41.105 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:41.105 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@652 -- 
# accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:41.105 12:26:25 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:41.105 [2024-05-15 12:26:25.665497] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:41.105 [2024-05-15 12:26:25.665580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392770 ] 00:05:41.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.364 [2024-05-15 12:26:25.737529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.364 [2024-05-15 12:26:25.806123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.364 [2024-05-15 12:26:25.845136] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.364 [2024-05-15 12:26:25.903686] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:41.364 00:05:41.364 Compression does not support the verify option, aborting. 
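[editor's note] The exit-status bookkeeping that follows ('es=161', the '(( es > 128 ))' check, and so on) is how the harness asserts that a command failed on purpose: the NOT/valid_exec_arg wrappers capture the status, fold values above 128 (signal deaths) down before classifying them, and succeed only when the wrapped command did not. A stripped-down stand-in for that idea, purely illustrative and much simpler than the real autotest_common.sh helper:

    # Illustrative only; the real NOT helper also normalizes statuses > 128 first.
    not() {
        "$@"
        local es=$?
        (( es != 0 ))    # succeed only if the wrapped command failed
    }
    not false && echo 'failure was expected and observed'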
00:05:41.364 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:05:41.364 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:41.364 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:05:41.364 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:05:41.364 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:05:41.364 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:41.364 00:05:41.364 real 0m0.329s 00:05:41.364 user 0m0.223s 00:05:41.364 sys 0m0.130s 00:05:41.364 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.364 12:26:25 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:41.364 ************************************ 00:05:41.364 END TEST accel_compress_verify 00:05:41.364 ************************************ 00:05:41.622 12:26:26 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:41.622 12:26:26 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:41.622 12:26:26 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.622 12:26:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.622 ************************************ 00:05:41.622 START TEST accel_wrong_workload 00:05:41.622 ************************************ 00:05:41.622 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:05:41.622 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:05:41.622 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:41.622 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:41.622 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:41.622 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:41.622 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:41.622 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:41.622 12:26:26 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:41.622 Unsupported workload type: foobar 00:05:41.622 [2024-05-15 12:26:26.079540] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:41.622 accel_perf options: 00:05:41.622 [-h help message] 00:05:41.622 [-q queue depth per core] 00:05:41.623 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.623 [-T number of threads per core 00:05:41.623 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.623 [-t time in seconds] 00:05:41.623 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.623 [ dif_verify, , dif_generate, dif_generate_copy 00:05:41.623 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.623 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.623 [-S for crc32c workload, use this seed value (default 0) 00:05:41.623 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.623 [-f for fill workload, use this BYTE value (default 255) 00:05:41.623 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.623 [-y verify result if this switch is on] 00:05:41.623 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.623 Can be used to spread operations across a wider range of memory. 00:05:41.623 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:05:41.623 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:41.623 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:41.623 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:41.623 00:05:41.623 real 0m0.029s 00:05:41.623 user 0m0.012s 00:05:41.623 sys 0m0.017s 00:05:41.623 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.623 12:26:26 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:41.623 ************************************ 00:05:41.623 END TEST accel_wrong_workload 00:05:41.623 ************************************ 00:05:41.623 Error: writing output failed: Broken pipe 00:05:41.623 12:26:26 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.623 12:26:26 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:41.623 12:26:26 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.623 12:26:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.623 ************************************ 00:05:41.623 START TEST accel_negative_buffers 00:05:41.623 ************************************ 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:41.623 12:26:26 accel.accel_negative_buffers -- 
common/autotest_common.sh@641 -- # type -t accel_perf 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:41.623 12:26:26 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:41.623 -x option must be non-negative. 00:05:41.623 [2024-05-15 12:26:26.195643] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:41.623 accel_perf options: 00:05:41.623 [-h help message] 00:05:41.623 [-q queue depth per core] 00:05:41.623 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.623 [-T number of threads per core 00:05:41.623 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.623 [-t time in seconds] 00:05:41.623 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.623 [ dif_verify, , dif_generate, dif_generate_copy 00:05:41.623 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.623 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.623 [-S for crc32c workload, use this seed value (default 0) 00:05:41.623 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.623 [-f for fill workload, use this BYTE value (default 255) 00:05:41.623 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.623 [-y verify result if this switch is on] 00:05:41.623 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.623 Can be used to spread operations across a wider range of memory. 
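[editor's note] For contrast with the rejected '-x -1' run, a well-formed xor invocation under this help text needs a source-buffer count of at least 2. The line below reuses the binary path and flags shown in the log, but the combination itself is illustrative and was not executed as part of this run:

    # Illustrative counterpart to the rejected "-x -1"; xor requires >= 2 source buffers
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2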
00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:41.623 00:05:41.623 real 0m0.029s 00:05:41.623 user 0m0.014s 00:05:41.623 sys 0m0.015s 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.623 12:26:26 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:41.623 ************************************ 00:05:41.623 END TEST accel_negative_buffers 00:05:41.623 ************************************ 00:05:41.623 Error: writing output failed: Broken pipe 00:05:41.881 12:26:26 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:41.881 12:26:26 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:41.881 12:26:26 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.881 12:26:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.881 ************************************ 00:05:41.881 START TEST accel_crc32c 00:05:41.881 ************************************ 00:05:41.881 12:26:26 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:41.881 12:26:26 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:41.881 12:26:26 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:41.882 [2024-05-15 12:26:26.292609] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:41.882 [2024-05-15 12:26:26.292690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392930 ] 00:05:41.882 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.882 [2024-05-15 12:26:26.364813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.882 [2024-05-15 12:26:26.440066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:41.882 12:26:26 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.882 12:26:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.255 12:26:27 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:43.255 12:26:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.255 00:05:43.255 real 0m1.343s 00:05:43.255 user 0m1.224s 00:05:43.255 sys 0m0.133s 00:05:43.255 12:26:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:43.255 12:26:27 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:43.255 ************************************ 00:05:43.255 END TEST accel_crc32c 00:05:43.255 ************************************ 00:05:43.255 12:26:27 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:43.255 12:26:27 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:43.255 12:26:27 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:43.255 12:26:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.255 ************************************ 00:05:43.256 START TEST accel_crc32c_C2 00:05:43.256 ************************************ 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:43.256 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:43.256 [2024-05-15 12:26:27.725308] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
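For reference, the accel_crc32c_C2 run that starts above drives the same prebuilt accel_perf example as the crc32c test that just finished, only with -C 2 added. A minimal sketch of reproducing that invocation by hand, assuming the SPDK tree is built at the workspace path shown in the trace and omitting the generated JSON config the harness feeds in via -c /dev/fd/62:

SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # path taken from the command line above
# -t 1: run for one second, -w crc32c: workload under test, -y: verify the results,
# -C 2: the chained-buffer count implied by the accel_crc32c_C2 test name
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w crc32c -y -C 2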
00:05:43.256 [2024-05-15 12:26:27.725399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393160 ] 00:05:43.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.256 [2024-05-15 12:26:27.796951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.256 [2024-05-15 12:26:27.868518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.514 12:26:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.448 00:05:44.448 real 0m1.339s 00:05:44.448 user 0m1.220s 00:05:44.448 sys 0m0.134s 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:44.448 12:26:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:44.448 ************************************ 00:05:44.448 END TEST accel_crc32c_C2 00:05:44.448 ************************************ 00:05:44.706 12:26:29 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:44.706 12:26:29 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:44.706 12:26:29 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:44.706 12:26:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.706 ************************************ 00:05:44.706 START TEST accel_copy 00:05:44.706 ************************************ 00:05:44.706 12:26:29 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.706 12:26:29 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.707 12:26:29 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.707 12:26:29 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.707 12:26:29 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:44.707 12:26:29 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:05:44.707 [2024-05-15 12:26:29.154149] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:44.707 [2024-05-15 12:26:29.154235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393406 ] 00:05:44.707 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.707 [2024-05-15 12:26:29.225851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.707 [2024-05-15 12:26:29.295591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.964 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.965 12:26:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:45.991 12:26:30 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.991 00:05:45.991 real 0m1.340s 00:05:45.991 user 0m1.211s 00:05:45.991 sys 0m0.142s 00:05:45.991 12:26:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:45.991 12:26:30 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:45.991 ************************************ 00:05:45.991 END TEST accel_copy 00:05:45.991 ************************************ 00:05:45.991 12:26:30 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.991 12:26:30 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:05:45.991 12:26:30 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:45.991 12:26:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.991 ************************************ 00:05:45.991 START TEST accel_fill 00:05:45.991 ************************************ 00:05:45.991 12:26:30 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.991 12:26:30 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:45.991 12:26:30 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:45.991 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.991 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.991 12:26:30 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.991 12:26:30 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.991 12:26:30 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:45.991 12:26:30 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.992 12:26:30 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.992 12:26:30 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.992 12:26:30 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.992 12:26:30 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.992 12:26:30 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:45.992 12:26:30 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:45.992 [2024-05-15 12:26:30.586522] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
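The asterisk banners and the real/user/sys timing lines interleaved above come from the run_test helper in common/autotest_common.sh, which wraps each accel_* case. A rough sketch of the behaviour visible in this log, not SPDK's actual implementation:

run_test_sketch() {
  # print the opening banner, time the wrapped command, then close the banner,
  # mirroring the START TEST / END TEST blocks captured in this output
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}
# invocation matching the accel_fill line above (accel_test is the accel.sh helper seen in the trace)
run_test_sketch accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y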
00:05:45.992 [2024-05-15 12:26:30.586619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393694 ] 00:05:46.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.250 [2024-05-15 12:26:30.656785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.250 [2024-05-15 12:26:30.731261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.250 12:26:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:47.620 12:26:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.620 00:05:47.620 real 0m1.341s 00:05:47.620 user 0m1.217s 00:05:47.620 sys 0m0.139s 00:05:47.620 12:26:31 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:47.620 12:26:31 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:47.620 ************************************ 00:05:47.620 END TEST accel_fill 00:05:47.620 ************************************ 00:05:47.620 12:26:31 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:47.620 12:26:31 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:47.620 12:26:31 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:47.620 12:26:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.620 ************************************ 00:05:47.620 START TEST accel_copy_crc32c 00:05:47.620 ************************************ 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.620 12:26:31 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:47.620 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:47.620 [2024-05-15 12:26:32.014496] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
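Taken together, this stretch of the log walks the software accel path through one workload after another: crc32c, crc32c with -C 2, copy, fill, and now copy_crc32c, with copy_crc32c -C 2 and dualcast following below. A condensed sketch of driving the same set of workloads in a loop, using only flags that appear in the captured command lines; per-workload extras such as -f/-q/-a for fill or -C 2 for the _C2 variants would still be appended case by case:

SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
for w in crc32c copy fill copy_crc32c dualcast; do
  # one-second verified run per workload; in this log every run is handled by the software module
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$w" -y
done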
00:05:47.620 [2024-05-15 12:26:32.014564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393975 ] 00:05:47.620 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.620 [2024-05-15 12:26:32.086290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.620 [2024-05-15 12:26:32.158304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.620 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.620 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.620 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.620 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.621 12:26:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.992 12:26:33 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.992 00:05:48.992 real 0m1.338s 00:05:48.992 user 0m1.221s 00:05:48.992 sys 0m0.133s 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:48.992 12:26:33 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:48.992 ************************************ 00:05:48.992 END TEST accel_copy_crc32c 00:05:48.992 ************************************ 00:05:48.992 12:26:33 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.992 12:26:33 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:48.992 12:26:33 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:48.992 12:26:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.992 ************************************ 00:05:48.992 START TEST accel_copy_crc32c_C2 00:05:48.992 ************************************ 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.992 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:48.992 [2024-05-15 12:26:33.441982] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:48.992 [2024-05-15 12:26:33.442057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394264 ] 00:05:48.992 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.992 [2024-05-15 12:26:33.513377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.992 [2024-05-15 12:26:33.583968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.250 12:26:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.182 00:05:50.182 real 0m1.339s 00:05:50.182 user 0m1.221s 00:05:50.182 sys 0m0.133s 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:50.182 12:26:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:50.182 ************************************ 00:05:50.182 END TEST 
accel_copy_crc32c_C2 00:05:50.182 ************************************ 00:05:50.440 12:26:34 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:50.440 12:26:34 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:50.440 12:26:34 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:50.440 12:26:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.440 ************************************ 00:05:50.440 START TEST accel_dualcast 00:05:50.440 ************************************ 00:05:50.440 12:26:34 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:50.440 12:26:34 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:50.440 [2024-05-15 12:26:34.868343] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
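Every test above finishes with the three checks traced at accel/accel.sh@27: the module and opcode recorded during the run must be non-empty, and the module must be the software one. A sketch of those assertions, using the variable names the trace assigns at accel.sh@22 and accel.sh@23:

# accel_module and accel_opc are set while accel.sh parses the run
# (the captured expansions show accel_module=software and the matching accel_opc for each case)
[[ -n "$accel_module" ]]                 # a module was recorded
[[ -n "$accel_opc" ]]                    # an opcode was recorded
[[ "$accel_module" == software ]]        # and the run stayed on the software path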
00:05:50.440 [2024-05-15 12:26:34.868435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394552 ] 00:05:50.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.440 [2024-05-15 12:26:34.937302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.440 [2024-05-15 12:26:35.007491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.440 
12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.440 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.697 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.698 12:26:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.628 12:26:36 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:51.628 12:26:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.628 00:05:51.628 real 0m1.336s 00:05:51.628 user 0m1.224s 00:05:51.628 sys 0m0.124s 00:05:51.628 12:26:36 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:51.628 12:26:36 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:51.628 ************************************ 00:05:51.628 END TEST accel_dualcast 00:05:51.628 ************************************ 00:05:51.628 12:26:36 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:51.628 12:26:36 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:51.628 12:26:36 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.628 12:26:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.884 ************************************ 00:05:51.884 START TEST accel_compare 00:05:51.884 ************************************ 00:05:51.885 12:26:36 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:51.885 [2024-05-15 12:26:36.293858] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:51.885 [2024-05-15 12:26:36.293934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394833 ] 00:05:51.885 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.885 [2024-05-15 12:26:36.366964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.885 [2024-05-15 12:26:36.439503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.885 12:26:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.255 12:26:37 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:53.255 12:26:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.255 00:05:53.255 real 0m1.341s 00:05:53.255 user 0m1.211s 00:05:53.255 sys 0m0.144s 00:05:53.255 12:26:37 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.255 12:26:37 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:53.255 ************************************ 00:05:53.255 END TEST accel_compare 00:05:53.255 ************************************ 00:05:53.255 12:26:37 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:53.255 12:26:37 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:53.255 12:26:37 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.255 12:26:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.255 ************************************ 00:05:53.255 START TEST accel_xor 00:05:53.255 ************************************ 00:05:53.255 12:26:37 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:53.256 12:26:37 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:53.256 [2024-05-15 12:26:37.723281] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:53.256 [2024-05-15 12:26:37.723363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395118 ] 00:05:53.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.256 [2024-05-15 12:26:37.794677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.256 [2024-05-15 12:26:37.868929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.514 12:26:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.446 
12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:54.446 12:26:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.446 00:05:54.446 real 0m1.342s 00:05:54.446 user 0m1.220s 00:05:54.446 sys 0m0.134s 00:05:54.446 12:26:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:54.446 12:26:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:54.446 ************************************ 00:05:54.446 END TEST accel_xor 00:05:54.446 ************************************ 00:05:54.704 12:26:39 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:54.704 12:26:39 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:54.704 12:26:39 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:54.704 12:26:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.704 ************************************ 00:05:54.704 START TEST accel_xor 00:05:54.704 ************************************ 00:05:54.704 12:26:39 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:54.704 12:26:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:54.704 [2024-05-15 12:26:39.153365] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:54.704 [2024-05-15 12:26:39.153456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395399 ] 00:05:54.704 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.704 [2024-05-15 12:26:39.224584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.704 [2024-05-15 12:26:39.297754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.962 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 12:26:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.896 
12:26:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:55.896 12:26:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.896 00:05:55.896 real 0m1.344s 00:05:55.896 user 0m1.219s 00:05:55.896 sys 0m0.137s 00:05:55.896 12:26:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:55.896 12:26:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:55.896 ************************************ 00:05:55.896 END TEST accel_xor 00:05:55.896 ************************************ 00:05:56.154 12:26:40 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:56.154 12:26:40 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:56.154 12:26:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:56.154 12:26:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.154 ************************************ 00:05:56.154 START TEST accel_dif_verify 00:05:56.154 ************************************ 00:05:56.154 12:26:40 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:56.154 12:26:40 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:56.154 [2024-05-15 12:26:40.587774] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:56.154 [2024-05-15 12:26:40.587853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395684 ] 00:05:56.154 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.154 [2024-05-15 12:26:40.660067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.155 [2024-05-15 12:26:40.730905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.155 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.412 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 
12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.413 12:26:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.346 
12:26:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.346 12:26:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:57.347 12:26:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.347 00:05:57.347 real 0m1.341s 00:05:57.347 user 0m1.208s 00:05:57.347 sys 0m0.148s 00:05:57.347 12:26:41 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:57.347 12:26:41 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:57.347 ************************************ 00:05:57.347 END TEST accel_dif_verify 00:05:57.347 ************************************ 00:05:57.347 12:26:41 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:57.347 12:26:41 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:57.347 12:26:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:57.347 12:26:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.605 ************************************ 00:05:57.605 START TEST accel_dif_generate 00:05:57.605 ************************************ 00:05:57.605 12:26:41 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:05:57.605 12:26:41 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:57.605 12:26:41 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:57.605 12:26:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 
12:26:41 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:57.605 12:26:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:57.605 12:26:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:57.605 12:26:41 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:57.605 [2024-05-15 12:26:42.017420] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:05:57.605 [2024-05-15 12:26:42.017500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395969 ] 00:05:57.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.605 [2024-05-15 12:26:42.088765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.605 [2024-05-15 12:26:42.161211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.605 12:26:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:58.976 12:26:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.976 00:05:58.976 real 0m1.339s 00:05:58.976 user 0m1.221s 00:05:58.976 sys 
0m0.133s 00:05:58.976 12:26:43 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:58.976 12:26:43 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:58.976 ************************************ 00:05:58.976 END TEST accel_dif_generate 00:05:58.976 ************************************ 00:05:58.976 12:26:43 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:58.976 12:26:43 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:58.976 12:26:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:58.976 12:26:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.976 ************************************ 00:05:58.976 START TEST accel_dif_generate_copy 00:05:58.976 ************************************ 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:58.977 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:58.977 [2024-05-15 12:26:43.448267] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:05:58.977 [2024-05-15 12:26:43.448342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396214 ] 00:05:58.977 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.977 [2024-05-15 12:26:43.521065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.234 [2024-05-15 12:26:43.594289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.234 12:26:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.166 00:06:00.166 real 0m1.342s 00:06:00.166 user 0m1.218s 00:06:00.166 sys 0m0.139s 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:00.166 12:26:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:00.166 ************************************ 00:06:00.167 END TEST accel_dif_generate_copy 00:06:00.167 ************************************ 00:06:00.424 12:26:44 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:00.424 12:26:44 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:00.424 12:26:44 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:00.424 12:26:44 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:00.424 12:26:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.424 ************************************ 00:06:00.424 START TEST accel_comp 00:06:00.424 ************************************ 00:06:00.424 12:26:44 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:00.425 12:26:44 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:00.425 [2024-05-15 12:26:44.878802] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:00.425 [2024-05-15 12:26:44.878881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396448 ] 00:06:00.425 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.425 [2024-05-15 12:26:44.950089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.425 [2024-05-15 12:26:45.021136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.682 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 
12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.683 12:26:45 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.683 12:26:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:01.615 12:26:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.615 00:06:01.615 real 0m1.339s 00:06:01.615 user 0m1.221s 00:06:01.615 sys 0m0.133s 00:06:01.615 12:26:46 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:01.615 12:26:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:01.615 ************************************ 00:06:01.615 END TEST accel_comp 00:06:01.615 ************************************ 00:06:01.873 12:26:46 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:01.873 12:26:46 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:01.873 12:26:46 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:01.873 12:26:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.873 ************************************ 00:06:01.873 START TEST accel_decomp 00:06:01.873 ************************************ 00:06:01.873 12:26:46 
accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:01.873 12:26:46 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:01.873 [2024-05-15 12:26:46.308282] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:01.873 [2024-05-15 12:26:46.308365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396685 ] 00:06:01.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.873 [2024-05-15 12:26:46.377745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.873 [2024-05-15 12:26:46.448544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.131 
12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.131 12:26:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.063 12:26:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.063 00:06:03.063 real 0m1.336s 00:06:03.063 user 0m1.221s 00:06:03.063 sys 0m0.129s 00:06:03.063 12:26:47 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:03.063 12:26:47 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:03.063 ************************************ 00:06:03.063 END TEST accel_decomp 00:06:03.063 ************************************ 00:06:03.063 
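The decompress run that just completed is, underneath the shell tracing above, a single accel_perf invocation; a minimal standalone form of it, assuming the same workspace paths and the default software module (the accel JSON config assembled by build_accel_config is effectively empty here, since no module was forced), would look roughly like:

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c <accel JSON config> -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y

where -t 1 corresponds to the '1 seconds' run time and -w decompress to the accel_opc=decompress recorded in the trace; in this harness the -c argument is supplied via /dev/fd/62 rather than a file on disk. This is a sketch for orientation only, not part of the captured output.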
12:26:47 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:03.063 12:26:47 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:03.063 12:26:47 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:03.063 12:26:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 ************************************ 00:06:03.321 START TEST accel_decmop_full 00:06:03.321 ************************************ 00:06:03.321 12:26:47 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:03.321 [2024-05-15 12:26:47.737219] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:03.321 [2024-05-15 12:26:47.737317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396902 ] 00:06:03.321 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.321 [2024-05-15 12:26:47.807606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.321 [2024-05-15 12:26:47.881124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:03.321 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.322 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.579 12:26:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.512 12:26:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.512 00:06:04.512 real 0m1.348s 00:06:04.512 user 0m1.231s 00:06:04.512 sys 0m0.131s 00:06:04.512 12:26:49 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:04.512 12:26:49 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:04.512 ************************************ 00:06:04.512 END TEST accel_decmop_full 00:06:04.512 ************************************ 00:06:04.512 12:26:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.512 12:26:49 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:04.512 12:26:49 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:04.512 12:26:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.771 ************************************ 00:06:04.771 START TEST accel_decomp_mcore 00:06:04.771 ************************************ 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:04.771 [2024-05-15 12:26:49.174131] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:04.771 [2024-05-15 12:26:49.174209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397153 ] 00:06:04.771 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.771 [2024-05-15 12:26:49.248573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.771 [2024-05-15 12:26:49.324258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.771 [2024-05-15 12:26:49.324350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.771 [2024-05-15 12:26:49.324448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.771 [2024-05-15 12:26:49.324451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.771 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.772 12:26:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.144 00:06:06.144 real 0m1.360s 00:06:06.144 user 0m4.566s 00:06:06.144 sys 0m0.141s 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:06.144 12:26:50 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:06.144 ************************************ 00:06:06.144 END TEST accel_decomp_mcore 00:06:06.144 ************************************ 00:06:06.144 12:26:50 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.144 12:26:50 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:06.144 12:26:50 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:06.144 12:26:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.144 ************************************ 00:06:06.144 START TEST accel_decomp_full_mcore 00:06:06.144 ************************************ 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:06.144 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:06.144 [2024-05-15 12:26:50.623454] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:06.144 [2024-05-15 12:26:50.623534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397436 ] 00:06:06.144 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.144 [2024-05-15 12:26:50.696098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.402 [2024-05-15 12:26:50.771554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.402 [2024-05-15 12:26:50.771574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.402 [2024-05-15 12:26:50.771595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.402 [2024-05-15 12:26:50.771597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.402 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:06.403 12:26:50 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.403 12:26:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.777 00:06:07.777 real 0m1.369s 00:06:07.777 user 0m4.593s 00:06:07.777 sys 0m0.148s 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:07.777 12:26:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:07.777 ************************************ 00:06:07.777 END TEST accel_decomp_full_mcore 00:06:07.777 ************************************ 00:06:07.777 12:26:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.777 12:26:52 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:07.777 12:26:52 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:07.777 12:26:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.777 ************************************ 00:06:07.777 START TEST accel_decomp_mthread 00:06:07.777 ************************************ 00:06:07.777 12:26:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.777 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:07.777 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:07.777 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:06:07.778 [2024-05-15 12:26:52.079239] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:07.778 [2024-05-15 12:26:52.079315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397728 ] 00:06:07.778 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.778 [2024-05-15 12:26:52.151983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.778 [2024-05-15 12:26:52.223666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 
12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.778 12:26:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.778 12:26:52 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.151 00:06:09.151 real 0m1.346s 00:06:09.151 user 0m1.227s 00:06:09.151 sys 0m0.133s 00:06:09.151 12:26:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:09.152 12:26:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:09.152 ************************************ 00:06:09.152 END TEST accel_decomp_mthread 00:06:09.152 ************************************ 00:06:09.152 12:26:53 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.152 12:26:53 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:09.152 12:26:53 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:09.152 
12:26:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.152 ************************************ 00:06:09.152 START TEST accel_decomp_full_mthread 00:06:09.152 ************************************ 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:09.152 [2024-05-15 12:26:53.513676] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:09.152 [2024-05-15 12:26:53.513749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398007 ] 00:06:09.152 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.152 [2024-05-15 12:26:53.584389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.152 [2024-05-15 12:26:53.655549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.152 12:26:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.593 00:06:10.593 real 0m1.364s 00:06:10.593 user 0m1.239s 00:06:10.593 sys 0m0.140s 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:10.593 12:26:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:10.593 ************************************ 00:06:10.593 END TEST accel_decomp_full_mthread 00:06:10.593 
************************************ 00:06:10.593 12:26:54 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:10.593 12:26:54 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:10.593 12:26:54 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:10.593 12:26:54 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:10.593 12:26:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:10.593 12:26:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.593 12:26:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.593 12:26:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.593 12:26:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.593 12:26:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.593 12:26:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.593 12:26:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:10.593 12:26:54 accel -- accel/accel.sh@41 -- # jq -r . 00:06:10.593 ************************************ 00:06:10.593 START TEST accel_dif_functional_tests 00:06:10.593 ************************************ 00:06:10.593 12:26:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:10.593 [2024-05-15 12:26:54.970413] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:10.593 [2024-05-15 12:26:54.970493] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398292 ] 00:06:10.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.593 [2024-05-15 12:26:55.039128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.593 [2024-05-15 12:26:55.119984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.593 [2024-05-15 12:26:55.120001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.593 [2024-05-15 12:26:55.120003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.593 00:06:10.593 00:06:10.593 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.593 http://cunit.sourceforge.net/ 00:06:10.593 00:06:10.593 00:06:10.593 Suite: accel_dif 00:06:10.593 Test: verify: DIF generated, GUARD check ...passed 00:06:10.593 Test: verify: DIF generated, APPTAG check ...passed 00:06:10.593 Test: verify: DIF generated, REFTAG check ...passed 00:06:10.593 Test: verify: DIF not generated, GUARD check ...[2024-05-15 12:26:55.187410] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.593 [2024-05-15 12:26:55.187454] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.593 passed 00:06:10.593 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 12:26:55.187488] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.593 [2024-05-15 12:26:55.187507] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.593 passed 00:06:10.593 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 12:26:55.187528] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.593 [2024-05-15 
12:26:55.187547] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.593 passed 00:06:10.593 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:10.593 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 12:26:55.187592] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:10.593 passed 00:06:10.593 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:10.594 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:10.594 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:10.594 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 12:26:55.187692] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:10.594 passed 00:06:10.594 Test: generate copy: DIF generated, GUARD check ...passed 00:06:10.594 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:10.594 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:10.594 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:10.594 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:10.594 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:10.594 Test: generate copy: iovecs-len validate ...[2024-05-15 12:26:55.187873] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:10.594 passed 00:06:10.594 Test: generate copy: buffer alignment validate ...passed 00:06:10.594 00:06:10.594 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.594 suites 1 1 n/a 0 0 00:06:10.594 tests 20 20 20 0 0 00:06:10.594 asserts 204 204 204 0 n/a 00:06:10.594 00:06:10.594 Elapsed time = 0.002 seconds 00:06:10.852 00:06:10.852 real 0m0.399s 00:06:10.852 user 0m0.556s 00:06:10.852 sys 0m0.154s 00:06:10.852 12:26:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:10.852 12:26:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:10.852 ************************************ 00:06:10.852 END TEST accel_dif_functional_tests 00:06:10.852 ************************************ 00:06:10.852 00:06:10.852 real 0m31.683s 00:06:10.852 user 0m34.478s 00:06:10.852 sys 0m5.198s 00:06:10.852 12:26:55 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:10.852 12:26:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.852 ************************************ 00:06:10.852 END TEST accel 00:06:10.852 ************************************ 00:06:10.852 12:26:55 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:10.852 12:26:55 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:10.852 12:26:55 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:10.852 12:26:55 -- common/autotest_common.sh@10 -- # set +x 00:06:11.111 ************************************ 00:06:11.111 START TEST accel_rpc 00:06:11.111 ************************************ 00:06:11.111 12:26:55 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:11.111 * Looking for test storage... 
00:06:11.111 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:11.111 12:26:55 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.111 12:26:55 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2398467 00:06:11.111 12:26:55 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2398467 00:06:11.111 12:26:55 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:11.111 12:26:55 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 2398467 ']' 00:06:11.111 12:26:55 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.111 12:26:55 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:11.111 12:26:55 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.111 12:26:55 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:11.111 12:26:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.111 [2024-05-15 12:26:55.606935] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:11.111 [2024-05-15 12:26:55.606996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398467 ] 00:06:11.111 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.111 [2024-05-15 12:26:55.676823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.369 [2024-05-15 12:26:55.757147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.935 12:26:56 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:11.935 12:26:56 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:11.935 12:26:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:11.936 12:26:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:11.936 12:26:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:11.936 12:26:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:11.936 12:26:56 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:11.936 12:26:56 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:11.936 12:26:56 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:11.936 12:26:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.936 ************************************ 00:06:11.936 START TEST accel_assign_opcode 00:06:11.936 ************************************ 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:11.936 [2024-05-15 12:26:56.463267] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:11.936 [2024-05-15 12:26:56.471275] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.936 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.194 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:12.194 12:26:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:12.194 12:26:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:12.195 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:12.195 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.195 12:26:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:12.195 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:12.195 software 00:06:12.195 00:06:12.195 real 0m0.227s 00:06:12.195 user 0m0.043s 00:06:12.195 sys 0m0.014s 00:06:12.195 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:12.195 12:26:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.195 ************************************ 00:06:12.195 END TEST accel_assign_opcode 00:06:12.195 ************************************ 00:06:12.195 12:26:56 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2398467 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 2398467 ']' 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 2398467 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2398467 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2398467' 00:06:12.195 killing process with pid 2398467 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@966 -- # kill 2398467 00:06:12.195 12:26:56 accel_rpc -- common/autotest_common.sh@971 -- # wait 2398467 00:06:12.762 00:06:12.762 real 0m1.607s 00:06:12.762 user 0m1.639s 00:06:12.762 sys 0m0.485s 00:06:12.762 12:26:57 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:12.762 12:26:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.762 ************************************ 00:06:12.762 END TEST accel_rpc 00:06:12.762 ************************************ 00:06:12.762 12:26:57 -- 
spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.762 12:26:57 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:12.762 12:26:57 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:12.762 12:26:57 -- common/autotest_common.sh@10 -- # set +x 00:06:12.762 ************************************ 00:06:12.762 START TEST app_cmdline 00:06:12.762 ************************************ 00:06:12.762 12:26:57 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.762 * Looking for test storage... 00:06:12.762 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:12.762 12:26:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.762 12:26:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2398856 00:06:12.762 12:26:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2398856 00:06:12.763 12:26:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.763 12:26:57 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 2398856 ']' 00:06:12.763 12:26:57 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.763 12:26:57 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:12.763 12:26:57 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.763 12:26:57 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:12.763 12:26:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.763 [2024-05-15 12:26:57.261066] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:12.763 [2024-05-15 12:26:57.261120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398856 ] 00:06:12.763 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.763 [2024-05-15 12:26:57.328549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.021 [2024-05-15 12:26:57.408150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.588 12:26:58 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:13.588 12:26:58 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:06:13.588 12:26:58 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:13.846 { 00:06:13.846 "version": "SPDK v24.05-pre git sha1 95a28e501", 00:06:13.846 "fields": { 00:06:13.846 "major": 24, 00:06:13.846 "minor": 5, 00:06:13.846 "patch": 0, 00:06:13.846 "suffix": "-pre", 00:06:13.846 "commit": "95a28e501" 00:06:13.846 } 00:06:13.846 } 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.846 12:26:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:13.846 
12:26:58 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:13.846 12:26:58 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.105 request: 00:06:14.105 { 00:06:14.105 "method": "env_dpdk_get_mem_stats", 00:06:14.105 "req_id": 1 00:06:14.105 } 00:06:14.105 Got JSON-RPC error response 00:06:14.105 response: 00:06:14.105 { 00:06:14.105 "code": -32601, 00:06:14.105 "message": "Method not found" 00:06:14.105 } 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:14.105 12:26:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2398856 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 2398856 ']' 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 2398856 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2398856 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2398856' 00:06:14.105 killing process with pid 2398856 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@966 -- # kill 2398856 00:06:14.105 12:26:58 app_cmdline -- common/autotest_common.sh@971 -- # wait 2398856 00:06:14.364 00:06:14.364 real 0m1.676s 00:06:14.364 user 0m1.966s 00:06:14.364 sys 0m0.473s 00:06:14.364 12:26:58 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:14.364 12:26:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.364 ************************************ 00:06:14.364 END TEST app_cmdline 00:06:14.364 ************************************ 00:06:14.364 12:26:58 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:14.364 12:26:58 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:14.364 12:26:58 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.364 12:26:58 -- common/autotest_common.sh@10 -- # set +x 00:06:14.364 ************************************ 00:06:14.364 START TEST version 00:06:14.364 ************************************ 00:06:14.364 12:26:58 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:14.623 * Looking for test storage... 
00:06:14.623 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:14.623 12:26:59 version -- app/version.sh@17 -- # get_header_version major 00:06:14.623 12:26:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:14.623 12:26:59 version -- app/version.sh@14 -- # cut -f2 00:06:14.623 12:26:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.623 12:26:59 version -- app/version.sh@17 -- # major=24 00:06:14.623 12:26:59 version -- app/version.sh@18 -- # get_header_version minor 00:06:14.623 12:26:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:14.623 12:26:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.623 12:26:59 version -- app/version.sh@14 -- # cut -f2 00:06:14.623 12:26:59 version -- app/version.sh@18 -- # minor=5 00:06:14.623 12:26:59 version -- app/version.sh@19 -- # get_header_version patch 00:06:14.623 12:26:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:14.623 12:26:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.623 12:26:59 version -- app/version.sh@14 -- # cut -f2 00:06:14.623 12:26:59 version -- app/version.sh@19 -- # patch=0 00:06:14.623 12:26:59 version -- app/version.sh@20 -- # get_header_version suffix 00:06:14.623 12:26:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:14.623 12:26:59 version -- app/version.sh@14 -- # cut -f2 00:06:14.623 12:26:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.623 12:26:59 version -- app/version.sh@20 -- # suffix=-pre 00:06:14.623 12:26:59 version -- app/version.sh@22 -- # version=24.5 00:06:14.623 12:26:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:14.623 12:26:59 version -- app/version.sh@28 -- # version=24.5rc0 00:06:14.623 12:26:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:14.623 12:26:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:14.623 12:26:59 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:14.623 12:26:59 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:14.623 00:06:14.623 real 0m0.184s 00:06:14.623 user 0m0.081s 00:06:14.623 sys 0m0.143s 00:06:14.623 12:26:59 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:14.623 12:26:59 version -- common/autotest_common.sh@10 -- # set +x 00:06:14.623 ************************************ 00:06:14.623 END TEST version 00:06:14.623 ************************************ 00:06:14.624 12:26:59 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@194 -- # uname -s 00:06:14.624 12:26:59 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:14.624 12:26:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.624 12:26:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.624 12:26:59 -- spdk/autotest.sh@207 -- 
# '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:14.624 12:26:59 -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:14.624 12:26:59 -- common/autotest_common.sh@10 -- # set +x 00:06:14.624 12:26:59 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:14.624 12:26:59 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:06:14.624 12:26:59 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:14.624 12:26:59 -- spdk/autotest.sh@367 -- # [[ 1 -eq 1 ]] 00:06:14.624 12:26:59 -- spdk/autotest.sh@368 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:14.624 12:26:59 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:14.624 12:26:59 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.624 12:26:59 -- common/autotest_common.sh@10 -- # set +x 00:06:14.882 ************************************ 00:06:14.882 START TEST llvm_fuzz 00:06:14.882 ************************************ 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:14.882 * Looking for test storage... 
00:06:14.882 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@547 -- # fuzzers=() 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@547 -- # local fuzzers 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@549 -- # [[ -n '' ]] 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@556 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:14.882 12:26:59 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.882 12:26:59 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:14.882 ************************************ 00:06:14.882 START TEST nvmf_fuzz 00:06:14.882 ************************************ 00:06:14.882 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:14.882 * Looking for test storage... 
00:06:15.144 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:15.144 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:15.145 12:26:59 
llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:15.145 #define SPDK_CONFIG_H 00:06:15.145 #define SPDK_CONFIG_APPS 1 00:06:15.145 #define SPDK_CONFIG_ARCH native 00:06:15.145 #undef SPDK_CONFIG_ASAN 00:06:15.145 #undef SPDK_CONFIG_AVAHI 00:06:15.145 #undef SPDK_CONFIG_CET 00:06:15.145 #define SPDK_CONFIG_COVERAGE 1 00:06:15.145 #define SPDK_CONFIG_CROSS_PREFIX 00:06:15.145 #undef SPDK_CONFIG_CRYPTO 00:06:15.145 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:15.145 #undef SPDK_CONFIG_CUSTOMOCF 00:06:15.145 #undef SPDK_CONFIG_DAOS 00:06:15.145 #define SPDK_CONFIG_DAOS_DIR 00:06:15.145 #define SPDK_CONFIG_DEBUG 1 00:06:15.145 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:15.145 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:15.145 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:15.145 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:15.145 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:15.145 #undef SPDK_CONFIG_DPDK_UADK 00:06:15.145 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:15.145 #define SPDK_CONFIG_EXAMPLES 1 00:06:15.145 #undef SPDK_CONFIG_FC 00:06:15.145 #define SPDK_CONFIG_FC_PATH 00:06:15.145 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:15.145 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:15.145 #undef SPDK_CONFIG_FUSE 00:06:15.145 #define SPDK_CONFIG_FUZZER 1 00:06:15.145 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:15.145 #undef SPDK_CONFIG_GOLANG 00:06:15.145 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:15.145 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:15.145 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:15.145 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:15.145 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:15.145 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:15.145 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:15.145 #define SPDK_CONFIG_IDXD 1 00:06:15.145 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:15.145 #undef SPDK_CONFIG_IPSEC_MB 00:06:15.145 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:15.145 #define SPDK_CONFIG_ISAL 1 00:06:15.145 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:15.145 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:15.145 #define SPDK_CONFIG_LIBDIR 00:06:15.145 #undef SPDK_CONFIG_LTO 00:06:15.145 #define SPDK_CONFIG_MAX_LCORES 00:06:15.145 #define SPDK_CONFIG_NVME_CUSE 1 00:06:15.145 #undef SPDK_CONFIG_OCF 00:06:15.145 #define SPDK_CONFIG_OCF_PATH 00:06:15.145 #define SPDK_CONFIG_OPENSSL_PATH 00:06:15.145 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:15.145 #define SPDK_CONFIG_PGO_DIR 00:06:15.145 #undef SPDK_CONFIG_PGO_USE 00:06:15.145 #define SPDK_CONFIG_PREFIX /usr/local 00:06:15.145 #undef SPDK_CONFIG_RAID5F 00:06:15.145 #undef 
SPDK_CONFIG_RBD 00:06:15.145 #define SPDK_CONFIG_RDMA 1 00:06:15.145 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:15.145 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:15.145 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:15.145 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:15.145 #undef SPDK_CONFIG_SHARED 00:06:15.145 #undef SPDK_CONFIG_SMA 00:06:15.145 #define SPDK_CONFIG_TESTS 1 00:06:15.145 #undef SPDK_CONFIG_TSAN 00:06:15.145 #define SPDK_CONFIG_UBLK 1 00:06:15.145 #define SPDK_CONFIG_UBSAN 1 00:06:15.145 #undef SPDK_CONFIG_UNIT_TESTS 00:06:15.145 #undef SPDK_CONFIG_URING 00:06:15.145 #define SPDK_CONFIG_URING_PATH 00:06:15.145 #undef SPDK_CONFIG_URING_ZNS 00:06:15.145 #undef SPDK_CONFIG_USDT 00:06:15.145 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:15.145 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:15.145 #define SPDK_CONFIG_VFIO_USER 1 00:06:15.145 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:15.145 #define SPDK_CONFIG_VHOST 1 00:06:15.145 #define SPDK_CONFIG_VIRTIO 1 00:06:15.145 #undef SPDK_CONFIG_VTUNE 00:06:15.145 #define SPDK_CONFIG_VTUNE_DIR 00:06:15.145 #define SPDK_CONFIG_WERROR 1 00:06:15.145 #define SPDK_CONFIG_WPDK_DIR 00:06:15.145 #undef SPDK_CONFIG_XNVME 00:06:15.145 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:06:15.145 12:26:59 
llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:15.145 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # : 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:15.146 
12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # : 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # : 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # : 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # : 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:15.146 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # [[ -z 2399388 ]] 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # kill -0 2399388 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:15.147 
12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.K2FXkL 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.K2FXkL/tests/nvmf /tmp/spdk.K2FXkL 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=968024064 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4316405760 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=52260868096 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=9481437184 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866440192 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:15.147 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342489088 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5971968 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30869540864 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1613824 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:15.148 * Looking for test storage... 
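The set_test_storage trace above reduces to a simple selection: record each mount's available bytes from df -T, then walk the candidate directories and keep the first one whose backing mount can hold the requested size (2 GiB plus 64 MiB of slack here). A minimal sketch of that logic, assuming hypothetical $testdir / $storage_fallback values and GNU df; this is not the literal autotest_common.sh code:

    #!/usr/bin/env bash
    # Sketch: pick the first candidate directory whose backing mount has enough free space.
    requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))   # 2214592512, as requested above
    storage_candidates=("$testdir" "$storage_fallback/tests/nvmf" "$storage_fallback")   # hypothetical values
    declare -A avails
    while read -r _src _fs _size _used avail _pct mnt; do
        avails["$mnt"]=$avail
    done < <(df -T -B1 | tail -n +2)
    for target_dir in "${storage_candidates[@]}"; do
        mnt=$(df --output=target "$target_dir" 2>/dev/null | tail -n 1)
        if [[ -n "$mnt" && ${avails[$mnt]:-0} -ge $requested_size ]]; then
            break
        fi
    done
    export SPDK_TEST_STORAGE="$target_dir"

In this job the walk lands on the overlay root with roughly 52 GB free, which is why the next lines report target_space=52260868096 and settle on the spdk/test/fuzz/llvm/nvmf directory as the test storage.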
00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # target_space=52260868096 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # new_size=11696029696 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:15.148 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # set -o errtrace 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1684 -- # true 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1686 -- # xtrace_fd 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:15.148 12:26:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:15.148 [2024-05-15 12:26:59.704951] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:15.148 [2024-05-15 12:26:59.705042] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399432 ] 00:06:15.148 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.407 [2024-05-15 12:26:59.885163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.407 [2024-05-15 12:26:59.950053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.407 [2024-05-15 12:27:00.009641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.665 [2024-05-15 12:27:00.025420] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:15.665 [2024-05-15 12:27:00.025842] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:15.665 INFO: Running with entropic power schedule (0xFF, 100). 00:06:15.665 INFO: Seed: 3144314224 00:06:15.665 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:15.665 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:15.665 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:15.665 INFO: A corpus is not provided, starting from an empty corpus 00:06:15.665 #2 INITED exec/s: 0 rss: 63Mb 00:06:15.665 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
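The command line above is what start_llvm_fuzz in nvmf/run.sh assembles for fuzzer 0: the two-digit fuzzer index becomes the TCP port (44 plus the index, so 4400 here), the stock fuzz_json.conf has its trsvcid rewritten to that port, two expected leaks are suppressed for LeakSanitizer, and the llvm_nvme_fuzz binary is launched against the resulting target. A simplified sketch of one such iteration using the paths from this job; the redirection of the sed output into $nvmf_cfg and of the leak: lines into $suppress_file is assumed, since the trace only shows the commands themselves:

    #!/usr/bin/env bash
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzer_type=${1:-0} timen=${2:-1} core=${3:-0x1}
    port="44$(printf %02d "$fuzzer_type")"              # fuzzer 0 -> 4400, fuzzer 1 -> 4401, ...
    corpus_dir=$spdk/../corpus/llvm_nvmf_$fuzzer_type
    nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # Point the NVMe/TCP listener in the JSON config at this run's port.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # Leaks that are expected in this test are hidden from LeakSanitizer.
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
        "$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$spdk/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
        -D "$corpus_dir" -Z "$fuzzer_type"

With an empty corpus directory, libFuzzer prints the "no interesting inputs" warning above and simply starts from scratch for the one-second budget.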
00:06:15.665 This may also happen if the target rejected all inputs we tried so far 00:06:15.665 [2024-05-15 12:27:00.102011] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:15.665 [2024-05-15 12:27:00.102051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.923 NEW_FUNC[1/685]: 0x481d20 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:15.923 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:15.923 #19 NEW cov: 11768 ft: 11771 corp: 2/118b lim: 320 exec/s: 0 rss: 70Mb L: 117/117 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:15.923 [2024-05-15 12:27:00.443196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90909090 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.923 [2024-05-15 12:27:00.443259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.923 NEW_FUNC[1/1]: 0x176cd60 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:15.923 #33 NEW cov: 11935 ft: 12829 corp: 3/202b lim: 320 exec/s: 0 rss: 70Mb L: 84/117 MS: 4 ShuffleBytes-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:06:15.923 [2024-05-15 12:27:00.503051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90909090 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.923 [2024-05-15 12:27:00.503084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.923 #34 NEW cov: 11941 ft: 13165 corp: 4/286b lim: 320 exec/s: 0 rss: 70Mb L: 84/117 MS: 1 ChangeBinInt- 00:06:16.181 [2024-05-15 12:27:00.563185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.182 [2024-05-15 12:27:00.563216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.182 #35 NEW cov: 12026 ft: 13419 corp: 5/403b lim: 320 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 ShuffleBytes- 00:06:16.182 [2024-05-15 12:27:00.623387] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.182 [2024-05-15 12:27:00.623418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.182 #36 NEW cov: 12026 ft: 13524 corp: 6/520b lim: 320 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 ShuffleBytes- 00:06:16.182 [2024-05-15 12:27:00.673851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.182 [2024-05-15 12:27:00.673880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.182 [2024-05-15 12:27:00.674037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7d) qid:0 cid:5 nsid:7d7d7d7d cdw10:7d7d7d7d 
cdw11:7d7d7d7d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.182 [2024-05-15 12:27:00.674055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.182 #37 NEW cov: 12026 ft: 13843 corp: 7/711b lim: 320 exec/s: 0 rss: 70Mb L: 191/191 MS: 1 CrossOver- 00:06:16.182 [2024-05-15 12:27:00.723747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90000000 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.182 [2024-05-15 12:27:00.723774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.182 #38 NEW cov: 12026 ft: 13885 corp: 8/799b lim: 320 exec/s: 0 rss: 70Mb L: 88/191 MS: 1 CMP- DE: "\017\000\000\000"- 00:06:16.182 [2024-05-15 12:27:00.773848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.182 [2024-05-15 12:27:00.773877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.440 #39 NEW cov: 12026 ft: 13936 corp: 9/916b lim: 320 exec/s: 0 rss: 70Mb L: 117/191 MS: 1 ChangeBit- 00:06:16.440 [2024-05-15 12:27:00.823551] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.440 [2024-05-15 12:27:00.823581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.440 #40 NEW cov: 12026 ft: 14063 corp: 10/1034b lim: 320 exec/s: 0 rss: 70Mb L: 118/191 MS: 1 InsertByte- 00:06:16.440 [2024-05-15 12:27:00.874103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.440 [2024-05-15 12:27:00.874132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.440 #41 NEW cov: 12026 ft: 14114 corp: 11/1156b lim: 320 exec/s: 0 rss: 70Mb L: 122/191 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:06:16.440 [2024-05-15 12:27:00.924444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.440 [2024-05-15 12:27:00.924473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.440 [2024-05-15 12:27:00.924591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:16.440 [2024-05-15 12:27:00.924608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.440 [2024-05-15 12:27:00.924720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:90909090 cdw10:90909090 cdw11:90909090 00:06:16.440 [2024-05-15 12:27:00.924736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.440 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:16.440 #42 NEW cov: 12051 ft: 14281 corp: 12/1349b lim: 320 exec/s: 0 rss: 70Mb 
L: 193/193 MS: 1 InsertRepeatedBytes- 00:06:16.440 [2024-05-15 12:27:00.974022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.440 [2024-05-15 12:27:00.974055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.441 #48 NEW cov: 12051 ft: 14334 corp: 13/1466b lim: 320 exec/s: 0 rss: 71Mb L: 117/193 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:06:16.441 [2024-05-15 12:27:01.034787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.441 [2024-05-15 12:27:01.034817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.441 [2024-05-15 12:27:01.034938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:7d7d7d7d cdw11:7d7d7d7d 00:06:16.441 [2024-05-15 12:27:01.034958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.700 #49 NEW cov: 12051 ft: 14367 corp: 14/1638b lim: 320 exec/s: 0 rss: 71Mb L: 172/193 MS: 1 InsertRepeatedBytes- 00:06:16.700 [2024-05-15 12:27:01.084784] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffff7d7d 00:06:16.700 [2024-05-15 12:27:01.084815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.700 [2024-05-15 12:27:01.084961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:16.700 [2024-05-15 12:27:01.084980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.700 [2024-05-15 12:27:01.085112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7d) qid:0 cid:6 nsid:7d7d7d7d cdw10:7d287d7d cdw11:7d7d7d7d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.700 [2024-05-15 12:27:01.085130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.700 NEW_FUNC[1/1]: 0x13468e0 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2038 00:06:16.700 #50 NEW cov: 12082 ft: 14525 corp: 15/1850b lim: 320 exec/s: 50 rss: 71Mb L: 212/212 MS: 1 InsertRepeatedBytes- 00:06:16.700 [2024-05-15 12:27:01.145197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.700 [2024-05-15 12:27:01.145228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.700 [2024-05-15 12:27:01.145350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:7d7d7d7d cdw11:7d7d7d7d 00:06:16.700 [2024-05-15 12:27:01.145367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.700 #51 NEW cov: 12082 ft: 14579 corp: 16/2022b lim: 320 exec/s: 51 rss: 71Mb L: 172/212 MS: 1 
ChangeByte- 00:06:16.700 [2024-05-15 12:27:01.205009] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.700 [2024-05-15 12:27:01.205037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.700 #52 NEW cov: 12082 ft: 14587 corp: 17/2145b lim: 320 exec/s: 52 rss: 71Mb L: 123/212 MS: 1 InsertByte- 00:06:16.700 [2024-05-15 12:27:01.255170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.700 [2024-05-15 12:27:01.255199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.700 #53 NEW cov: 12082 ft: 14614 corp: 18/2262b lim: 320 exec/s: 53 rss: 71Mb L: 117/212 MS: 1 ChangeBinInt- 00:06:16.700 [2024-05-15 12:27:01.315438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90000000 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.700 [2024-05-15 12:27:01.315467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.958 #54 NEW cov: 12082 ft: 14636 corp: 19/2345b lim: 320 exec/s: 54 rss: 71Mb L: 83/212 MS: 1 EraseBytes- 00:06:16.958 [2024-05-15 12:27:01.365528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:16.958 [2024-05-15 12:27:01.365556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.958 #55 NEW cov: 12082 ft: 14655 corp: 20/2468b lim: 320 exec/s: 55 rss: 71Mb L: 123/212 MS: 1 CopyPart- 00:06:16.958 [2024-05-15 12:27:01.425727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90909090 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.958 [2024-05-15 12:27:01.425758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.958 #56 NEW cov: 12082 ft: 14676 corp: 21/2552b lim: 320 exec/s: 56 rss: 71Mb L: 84/212 MS: 1 ChangeBit- 00:06:16.958 [2024-05-15 12:27:01.475868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90000000 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.958 [2024-05-15 12:27:01.475898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.958 #57 NEW cov: 12082 ft: 14690 corp: 22/2639b lim: 320 exec/s: 57 rss: 71Mb L: 87/212 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:06:16.958 [2024-05-15 12:27:01.536479] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffff7d7d 00:06:16.958 [2024-05-15 12:27:01.536510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.958 [2024-05-15 12:27:01.536646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 
00:06:16.958 [2024-05-15 12:27:01.536664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.958 [2024-05-15 12:27:01.536802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7d) qid:0 cid:6 nsid:7d7d7d7d cdw10:7d287d7d cdw11:7d7d7d7d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.958 [2024-05-15 12:27:01.536818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.958 #58 NEW cov: 12082 ft: 14699 corp: 23/2851b lim: 320 exec/s: 58 rss: 71Mb L: 212/212 MS: 1 ChangeByte- 00:06:17.216 [2024-05-15 12:27:01.596314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:17.216 [2024-05-15 12:27:01.596345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.216 #59 NEW cov: 12082 ft: 14710 corp: 24/2974b lim: 320 exec/s: 59 rss: 71Mb L: 123/212 MS: 1 CopyPart- 00:06:17.216 [2024-05-15 12:27:01.656563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90909090 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.216 [2024-05-15 12:27:01.656594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.216 #60 NEW cov: 12082 ft: 14714 corp: 25/3058b lim: 320 exec/s: 60 rss: 72Mb L: 84/212 MS: 1 CrossOver- 00:06:17.216 [2024-05-15 12:27:01.716735] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:17.216 [2024-05-15 12:27:01.716765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.216 #61 NEW cov: 12082 ft: 14730 corp: 26/3181b lim: 320 exec/s: 61 rss: 72Mb L: 123/212 MS: 1 ChangeBinInt- 00:06:17.216 [2024-05-15 12:27:01.766888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90000000 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.216 [2024-05-15 12:27:01.766917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.216 #62 NEW cov: 12082 ft: 14742 corp: 27/3264b lim: 320 exec/s: 62 rss: 72Mb L: 83/212 MS: 1 ChangeBinInt- 00:06:17.216 [2024-05-15 12:27:01.817003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:17.216 [2024-05-15 12:27:01.817033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.475 #63 NEW cov: 12082 ft: 14812 corp: 28/3381b lim: 320 exec/s: 63 rss: 72Mb L: 117/212 MS: 1 CMP- DE: "\001\000\000\000\002-'\215"- 00:06:17.475 [2024-05-15 12:27:01.867209] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:17.475 [2024-05-15 12:27:01.867238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.475 #64 NEW cov: 12082 ft: 14830 corp: 29/3498b lim: 320 exec/s: 64 rss: 72Mb 
L: 117/212 MS: 1 ChangeByte- 00:06:17.475 [2024-05-15 12:27:01.927372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:17.475 [2024-05-15 12:27:01.927409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.475 #65 NEW cov: 12082 ft: 14844 corp: 30/3623b lim: 320 exec/s: 65 rss: 72Mb L: 125/212 MS: 1 PersAutoDict- DE: "\001\000\000\000\002-'\215"- 00:06:17.475 [2024-05-15 12:27:01.977574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (31) qid:0 cid:4 nsid:90909090 cdw10:90909090 cdw11:90909090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.475 [2024-05-15 12:27:01.977603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.475 #71 NEW cov: 12082 ft: 14871 corp: 31/3707b lim: 320 exec/s: 71 rss: 72Mb L: 84/212 MS: 1 ShuffleBytes- 00:06:17.475 [2024-05-15 12:27:02.037980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7d7d7d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:17.475 [2024-05-15 12:27:02.038012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.475 [2024-05-15 12:27:02.038138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7d) qid:0 cid:5 nsid:7d7d7d7d cdw10:7d7d7d7d cdw11:5d7d7d7d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.475 [2024-05-15 12:27:02.038156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.475 #72 NEW cov: 12082 ft: 14876 corp: 32/3898b lim: 320 exec/s: 72 rss: 72Mb L: 191/212 MS: 1 CopyPart- 00:06:17.475 [2024-05-15 12:27:02.088156] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7d7d7d7d7d7d7d7d 00:06:17.475 [2024-05-15 12:27:02.088183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.475 [2024-05-15 12:27:02.088309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7d) qid:0 cid:5 nsid:7d7d7d7d cdw10:287d7d7d cdw11:7d7d7d7d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.475 [2024-05-15 12:27:02.088325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.734 #73 NEW cov: 12082 ft: 14890 corp: 33/4047b lim: 320 exec/s: 36 rss: 72Mb L: 149/212 MS: 1 EraseBytes- 00:06:17.734 #73 DONE cov: 12082 ft: 14890 corp: 33/4047b lim: 320 exec/s: 36 rss: 72Mb 00:06:17.734 ###### Recommended dictionary. ###### 00:06:17.734 "\017\000\000\000" # Uses: 3 00:06:17.734 "\001\000\000\000\002-'\215" # Uses: 1 00:06:17.734 ###### End of recommended dictionary. 
###### 00:06:17.734 Done 73 runs in 2 second(s) 00:06:17.734 [2024-05-15 12:27:02.120233] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:17.734 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:17.735 12:27:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:17.735 [2024-05-15 12:27:02.281175] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
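Run 0 is now complete (73 runs in 2 seconds, as reported above), its per-run config and suppression file are removed, and the loop in the fuzz common.sh advances to fuzzer 1, which gets port 4401. The trace earlier derived the loop bound by counting the '.fn =' entries in llvm_nvme_fuzz.c, so the short pass boils down to roughly the following; this is a sketch of start_llvm_fuzz_short, with the per-run cleanup folded into the loop for brevity and start_llvm_fuzz standing for the iteration sketched above:

    #!/usr/bin/env bash
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzfile=$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c
    fuzz_num=$(grep -c '\.fn =' "$fuzzfile")   # 25 registered fuzzers in this tree
    time_per_fuzzer=1                          # seconds, the "-t 1" seen above
    for (( i = 0; i < fuzz_num; i++ )); do
        start_llvm_fuzz "$i" "$time_per_fuzzer" 0x1        # listens on port 44$(printf %02d "$i")
        rm -rf "/tmp/fuzz_json_${i}.conf" /var/tmp/suppress_nvmf_fuzz
    done

The remainder of this section is the same cycle for fuzzer_type=1 (fuzz_admin_get_log_page_command), whose inputs exercise GET LOG PAGE offsets, as the "Invalid log page offset" errors below show.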
00:06:17.735 [2024-05-15 12:27:02.281259] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400093 ] 00:06:17.735 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.993 [2024-05-15 12:27:02.460506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.993 [2024-05-15 12:27:02.527001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.993 [2024-05-15 12:27:02.586687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.993 [2024-05-15 12:27:02.602637] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:17.993 [2024-05-15 12:27:02.603011] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:18.251 INFO: Running with entropic power schedule (0xFF, 100). 00:06:18.251 INFO: Seed: 1423364384 00:06:18.251 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:18.251 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:18.251 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:18.251 INFO: A corpus is not provided, starting from an empty corpus 00:06:18.251 #2 INITED exec/s: 0 rss: 63Mb 00:06:18.251 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:18.251 This may also happen if the target rejected all inputs we tried so far 00:06:18.251 [2024-05-15 12:27:02.671363] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.251 [2024-05-15 12:27:02.671540] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.251 [2024-05-15 12:27:02.671678] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.251 [2024-05-15 12:27:02.671822] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.251 [2024-05-15 12:27:02.672145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.251 [2024-05-15 12:27:02.672177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.251 [2024-05-15 12:27:02.672285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.251 [2024-05-15 12:27:02.672302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.251 [2024-05-15 12:27:02.672414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.251 [2024-05-15 12:27:02.672431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.251 [2024-05-15 12:27:02.672561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:18.251 [2024-05-15 12:27:02.672580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.509 NEW_FUNC[1/686]: 0x482620 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:18.509 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:18.509 #22 NEW cov: 11851 ft: 11842 corp: 2/25b lim: 30 exec/s: 0 rss: 70Mb L: 24/24 MS: 5 InsertByte-ShuffleBytes-EraseBytes-ChangeByte-InsertRepeatedBytes- 00:06:18.509 [2024-05-15 12:27:03.012485] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.509 [2024-05-15 12:27:03.012654] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.509 [2024-05-15 12:27:03.012802] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.509 [2024-05-15 12:27:03.013147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.509 [2024-05-15 12:27:03.013190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.509 [2024-05-15 12:27:03.013315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.509 [2024-05-15 12:27:03.013334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.509 [2024-05-15 12:27:03.013449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.509 [2024-05-15 12:27:03.013471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.509 #23 NEW cov: 11983 ft: 13021 corp: 3/43b lim: 30 exec/s: 0 rss: 70Mb L: 18/24 MS: 1 CrossOver- 00:06:18.509 [2024-05-15 12:27:03.062156] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.509 [2024-05-15 12:27:03.062322] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.509 [2024-05-15 12:27:03.062484] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.509 [2024-05-15 12:27:03.062639] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.509 [2024-05-15 12:27:03.062971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.509 [2024-05-15 12:27:03.063001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.509 [2024-05-15 12:27:03.063121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.509 [2024-05-15 12:27:03.063140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.509 [2024-05-15 12:27:03.063267] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.509 [2024-05-15 12:27:03.063286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.509 [2024-05-15 12:27:03.063410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.509 [2024-05-15 12:27:03.063427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.510 #24 NEW cov: 11989 ft: 13238 corp: 4/68b lim: 30 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 CopyPart- 00:06:18.510 [2024-05-15 12:27:03.112725] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.510 [2024-05-15 12:27:03.112895] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.510 [2024-05-15 12:27:03.113052] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.510 [2024-05-15 12:27:03.113212] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.510 [2024-05-15 12:27:03.113569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.510 [2024-05-15 12:27:03.113598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.510 [2024-05-15 12:27:03.113715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.510 [2024-05-15 12:27:03.113732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.510 [2024-05-15 12:27:03.113850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f8341 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.510 [2024-05-15 12:27:03.113870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.510 [2024-05-15 12:27:03.113990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.510 [2024-05-15 12:27:03.114008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.766 #25 NEW cov: 12074 ft: 13492 corp: 5/93b lim: 30 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 InsertByte- 00:06:18.766 [2024-05-15 12:27:03.152708] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.766 [2024-05-15 12:27:03.152869] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.766 [2024-05-15 12:27:03.153033] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.766 [2024-05-15 12:27:03.153179] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.766 [2024-05-15 12:27:03.153533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a6f836f cdw11:00000003 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.153563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.766 [2024-05-15 12:27:03.153678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.153696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.766 [2024-05-15 12:27:03.153810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f8341 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.153829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.766 [2024-05-15 12:27:03.153953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.153970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.766 #31 NEW cov: 12074 ft: 13626 corp: 6/118b lim: 30 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 CrossOver- 00:06:18.766 [2024-05-15 12:27:03.202674] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.766 [2024-05-15 12:27:03.203040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.203070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.766 #34 NEW cov: 12074 ft: 14089 corp: 7/127b lim: 30 exec/s: 0 rss: 70Mb L: 9/25 MS: 3 ShuffleBytes-CopyPart-CrossOver- 00:06:18.766 [2024-05-15 12:27:03.242670] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.766 [2024-05-15 12:27:03.243014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.243044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.766 #35 NEW cov: 12074 ft: 14143 corp: 8/135b lim: 30 exec/s: 0 rss: 70Mb L: 8/25 MS: 1 EraseBytes- 00:06:18.766 [2024-05-15 12:27:03.292937] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:18.766 [2024-05-15 12:27:03.293305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.293333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.766 #36 NEW cov: 12074 ft: 14183 corp: 9/145b lim: 30 exec/s: 0 rss: 70Mb L: 10/25 MS: 1 InsertByte- 00:06:18.766 [2024-05-15 12:27:03.333075] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f8a 00:06:18.766 [2024-05-15 12:27:03.333443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.333473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.766 #37 NEW cov: 12074 ft: 14253 corp: 10/154b lim: 30 exec/s: 0 rss: 70Mb L: 9/25 MS: 1 ChangeBinInt- 00:06:18.766 [2024-05-15 12:27:03.372821] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002a6f 00:06:18.766 [2024-05-15 12:27:03.373165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.766 [2024-05-15 12:27:03.373193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.022 #38 NEW cov: 12074 ft: 14363 corp: 11/163b lim: 30 exec/s: 0 rss: 70Mb L: 9/25 MS: 1 CopyPart- 00:06:19.022 [2024-05-15 12:27:03.423239] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005b6f 00:06:19.022 [2024-05-15 12:27:03.423584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a025b cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.022 [2024-05-15 12:27:03.423615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.022 #39 NEW cov: 12074 ft: 14377 corp: 12/173b lim: 30 exec/s: 0 rss: 70Mb L: 10/25 MS: 1 CopyPart- 00:06:19.022 [2024-05-15 12:27:03.473603] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.022 [2024-05-15 12:27:03.473783] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x126f 00:06:19.022 [2024-05-15 12:27:03.473940] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.022 [2024-05-15 12:27:03.474269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.022 [2024-05-15 12:27:03.474298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.022 [2024-05-15 12:27:03.474426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f006f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.022 [2024-05-15 12:27:03.474448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.023 [2024-05-15 12:27:03.474562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.023 [2024-05-15 12:27:03.474581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.023 #40 NEW cov: 12074 ft: 14416 corp: 13/191b lim: 30 exec/s: 0 rss: 70Mb L: 18/25 MS: 1 ChangeBinInt- 00:06:19.023 [2024-05-15 12:27:03.523672] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796844) > buf size (4096) 00:06:19.023 [2024-05-15 12:27:03.523826] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.023 [2024-05-15 12:27:03.524171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.023 [2024-05-15 12:27:03.524200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.023 [2024-05-15 12:27:03.524320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.023 [2024-05-15 12:27:03.524336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.023 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:19.023 #41 NEW cov: 12120 ft: 14738 corp: 14/203b lim: 30 exec/s: 0 rss: 70Mb L: 12/25 MS: 1 CMP- DE: "\010\000\000\000"- 00:06:19.023 [2024-05-15 12:27:03.573931] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (217940) > buf size (4096) 00:06:19.023 [2024-05-15 12:27:03.574099] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (217940) > buf size (4096) 00:06:19.023 [2024-05-15 12:27:03.574258] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796844) > buf size (4096) 00:06:19.023 [2024-05-15 12:27:03.574411] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.023 [2024-05-15 12:27:03.574765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d4d400d4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.023 [2024-05-15 12:27:03.574804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.023 [2024-05-15 12:27:03.574931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d4d400d4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.023 [2024-05-15 12:27:03.574951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.023 [2024-05-15 12:27:03.575070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.023 [2024-05-15 12:27:03.575088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.023 [2024-05-15 12:27:03.575207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0000836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.023 [2024-05-15 12:27:03.575225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.023 #42 NEW cov: 12120 ft: 14776 corp: 15/227b lim: 30 exec/s: 0 rss: 70Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:06:19.023 [2024-05-15 12:27:03.623916] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xcb0a 00:06:19.023 [2024-05-15 12:27:03.624298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.023 [2024-05-15 12:27:03.624327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.280 #45 NEW cov: 12120 ft: 14814 corp: 16/233b lim: 30 exec/s: 45 rss: 70Mb L: 6/25 MS: 3 
ShuffleBytes-InsertByte-PersAutoDict- DE: "\010\000\000\000"- 00:06:19.280 [2024-05-15 12:27:03.664309] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:19.280 [2024-05-15 12:27:03.664632] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:06:19.280 [2024-05-15 12:27:03.664794] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.280 [2024-05-15 12:27:03.665135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.665165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.280 [2024-05-15 12:27:03.665290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.665308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.280 [2024-05-15 12:27:03.665426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000832a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.665444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.280 [2024-05-15 12:27:03.665559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.665575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.280 #46 NEW cov: 12137 ft: 14869 corp: 17/258b lim: 30 exec/s: 46 rss: 70Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:06:19.280 [2024-05-15 12:27:03.704348] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.280 [2024-05-15 12:27:03.704537] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.280 [2024-05-15 12:27:03.704704] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006f6f 00:06:19.280 [2024-05-15 12:27:03.705073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.705102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.280 [2024-05-15 12:27:03.705216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.705235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.280 [2024-05-15 12:27:03.705354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f820282 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.705373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:19.280 #47 NEW cov: 12137 ft: 14903 corp: 18/279b lim: 30 exec/s: 47 rss: 70Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:06:19.280 [2024-05-15 12:27:03.744284] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002a6f 00:06:19.280 [2024-05-15 12:27:03.744660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a026f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.744688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.280 #48 NEW cov: 12137 ft: 14915 corp: 19/288b lim: 30 exec/s: 48 rss: 70Mb L: 9/25 MS: 1 ChangeByte- 00:06:19.280 [2024-05-15 12:27:03.793957] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (252760) > buf size (4096) 00:06:19.280 [2024-05-15 12:27:03.794315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:f6d50090 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.794346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.280 #49 NEW cov: 12137 ft: 14943 corp: 20/296b lim: 30 exec/s: 49 rss: 70Mb L: 8/25 MS: 1 ChangeBinInt- 00:06:19.280 [2024-05-15 12:27:03.834472] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xcb0a 00:06:19.280 [2024-05-15 12:27:03.834843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:08fa0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.280 [2024-05-15 12:27:03.834874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.281 #50 NEW cov: 12137 ft: 14963 corp: 21/302b lim: 30 exec/s: 50 rss: 70Mb L: 6/25 MS: 1 ChangeBinInt- 00:06:19.281 [2024-05-15 12:27:03.884663] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002a6f 00:06:19.281 [2024-05-15 12:27:03.885000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.281 [2024-05-15 12:27:03.885029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.538 #51 NEW cov: 12137 ft: 14974 corp: 22/311b lim: 30 exec/s: 51 rss: 71Mb L: 9/25 MS: 1 ChangeByte- 00:06:19.538 [2024-05-15 12:27:03.924814] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002a6f 00:06:19.538 [2024-05-15 12:27:03.925000] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002a6f 00:06:19.538 [2024-05-15 12:27:03.925320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:03.925350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.538 [2024-05-15 12:27:03.925467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f838a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:03.925489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:19.538 #52 NEW cov: 12137 ft: 14998 corp: 23/326b lim: 30 exec/s: 52 rss: 71Mb L: 15/25 MS: 1 CopyPart- 00:06:19.538 [2024-05-15 12:27:03.964924] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.538 [2024-05-15 12:27:03.965279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a835b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:03.965307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.538 #53 NEW cov: 12137 ft: 15003 corp: 24/337b lim: 30 exec/s: 53 rss: 71Mb L: 11/25 MS: 1 InsertByte- 00:06:19.538 [2024-05-15 12:27:04.004982] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.538 [2024-05-15 12:27:04.005146] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.538 [2024-05-15 12:27:04.005308] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008282 00:06:19.538 [2024-05-15 12:27:04.005457] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.538 [2024-05-15 12:27:04.005813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.005843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.538 [2024-05-15 12:27:04.005964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.005980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.538 [2024-05-15 12:27:04.006101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f0241 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.006120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.538 [2024-05-15 12:27:04.006247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:826f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.006265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.538 #54 NEW cov: 12137 ft: 15009 corp: 25/366b lim: 30 exec/s: 54 rss: 71Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:19.538 [2024-05-15 12:27:04.045180] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10412) > buf size (4096) 00:06:19.538 [2024-05-15 12:27:04.045330] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2a 00:06:19.538 [2024-05-15 12:27:04.045495] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.538 [2024-05-15 12:27:04.045847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a005b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.045878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:19.538 [2024-05-15 12:27:04.045994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.046011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.538 [2024-05-15 12:27:04.046130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5b6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.046150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.538 #55 NEW cov: 12137 ft: 15036 corp: 26/384b lim: 30 exec/s: 55 rss: 71Mb L: 18/29 MS: 1 CMP- DE: "\200\000\000\000\000\000\000\000"- 00:06:19.538 [2024-05-15 12:27:04.095239] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xcb0a 00:06:19.538 [2024-05-15 12:27:04.095599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:f8fc0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.095629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.538 #56 NEW cov: 12137 ft: 15117 corp: 27/390b lim: 30 exec/s: 56 rss: 71Mb L: 6/29 MS: 1 ChangeBinInt- 00:06:19.538 [2024-05-15 12:27:04.135354] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000246f 00:06:19.538 [2024-05-15 12:27:04.135720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a026f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.538 [2024-05-15 12:27:04.135749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.796 #57 NEW cov: 12137 ft: 15148 corp: 28/399b lim: 30 exec/s: 57 rss: 71Mb L: 9/29 MS: 1 ChangeBinInt- 00:06:19.796 [2024-05-15 12:27:04.185710] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200009a9a 00:06:19.796 [2024-05-15 12:27:04.185900] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005b6f 00:06:19.796 [2024-05-15 12:27:04.186047] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.796 [2024-05-15 12:27:04.186392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9a9a029a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.796 [2024-05-15 12:27:04.186420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.796 [2024-05-15 12:27:04.186536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9a9a020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.796 [2024-05-15 12:27:04.186553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.796 [2024-05-15 12:27:04.186675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f835d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.796 [2024-05-15 12:27:04.186694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.796 #58 NEW cov: 12137 ft: 15213 corp: 29/418b lim: 30 exec/s: 58 rss: 71Mb L: 19/29 MS: 1 InsertRepeatedBytes- 00:06:19.796 [2024-05-15 12:27:04.235777] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796844) > buf size (4096) 00:06:19.796 [2024-05-15 12:27:04.235943] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ef6f 00:06:19.796 [2024-05-15 12:27:04.236283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.796 [2024-05-15 12:27:04.236315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.796 [2024-05-15 12:27:04.236432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.797 [2024-05-15 12:27:04.236450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.797 #59 NEW cov: 12137 ft: 15219 corp: 30/430b lim: 30 exec/s: 59 rss: 71Mb L: 12/29 MS: 1 ChangeBit- 00:06:19.797 [2024-05-15 12:27:04.275877] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (263708) > buf size (4096) 00:06:19.797 [2024-05-15 12:27:04.276030] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (22204) > buf size (4096) 00:06:19.797 [2024-05-15 12:27:04.276371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01868107 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.797 [2024-05-15 12:27:04.276405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.797 [2024-05-15 12:27:04.276513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:15ae00f8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.797 [2024-05-15 12:27:04.276531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.797 #60 NEW cov: 12137 ft: 15238 corp: 31/444b lim: 30 exec/s: 60 rss: 71Mb L: 14/29 MS: 1 CMP- DE: "\001\206\007I\241`\025\256"- 00:06:19.797 [2024-05-15 12:27:04.326095] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200009a9a 00:06:19.797 [2024-05-15 12:27:04.326270] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005b6f 00:06:19.797 [2024-05-15 12:27:04.326443] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:19.797 [2024-05-15 12:27:04.326795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9a9a029a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.797 [2024-05-15 12:27:04.326824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.797 [2024-05-15 12:27:04.326944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9a9a020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.797 [2024-05-15 12:27:04.326964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:19.797 [2024-05-15 12:27:04.327078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f2f835d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.797 [2024-05-15 12:27:04.327094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.797 #61 NEW cov: 12137 ft: 15346 corp: 32/463b lim: 30 exec/s: 61 rss: 71Mb L: 19/29 MS: 1 ChangeBit- 00:06:19.797 [2024-05-15 12:27:04.376017] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (535552) > buf size (4096) 00:06:19.797 [2024-05-15 12:27:04.376404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.797 [2024-05-15 12:27:04.376434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.797 #62 NEW cov: 12137 ft: 15352 corp: 33/472b lim: 30 exec/s: 62 rss: 71Mb L: 9/29 MS: 1 CMP- DE: "\377\377~%$\020V\351"- 00:06:20.054 [2024-05-15 12:27:04.416279] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.054 [2024-05-15 12:27:04.416445] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.416606] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (900544) > buf size (4096) 00:06:20.055 [2024-05-15 12:27:04.416762] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.417101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.417130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.417250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.417269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.417398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f8341 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.417418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.417526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0000836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.417543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.055 #63 NEW cov: 12137 ft: 15377 corp: 34/501b lim: 30 exec/s: 63 rss: 72Mb L: 29/29 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:06:20.055 [2024-05-15 12:27:04.456461] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10412) > buf size (4096) 00:06:20.055 [2024-05-15 12:27:04.456631] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2a 00:06:20.055 [2024-05-15 12:27:04.456791] 
ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f7e 00:06:20.055 [2024-05-15 12:27:04.457123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a005b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.457151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.457263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.457283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.457411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5b6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.457431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.055 #64 NEW cov: 12137 ft: 15415 corp: 35/519b lim: 30 exec/s: 64 rss: 72Mb L: 18/29 MS: 1 ChangeByte- 00:06:20.055 [2024-05-15 12:27:04.506582] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.506754] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.506915] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.507081] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.507418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.507447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.507566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.507585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.507700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.507720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.507839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.507858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.055 #65 NEW cov: 12137 ft: 15465 corp: 36/544b lim: 30 exec/s: 65 rss: 72Mb L: 25/29 MS: 1 ShuffleBytes- 00:06:20.055 [2024-05-15 12:27:04.556558] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 
[2024-05-15 12:27:04.556722] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002a6f 00:06:20.055 [2024-05-15 12:27:04.557072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2a836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.557101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.557209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8a6f832a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.557227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.055 #66 NEW cov: 12137 ft: 15483 corp: 37/559b lim: 30 exec/s: 66 rss: 72Mb L: 15/29 MS: 1 ShuffleBytes- 00:06:20.055 [2024-05-15 12:27:04.606920] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.607085] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.607247] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (900544) > buf size (4096) 00:06:20.055 [2024-05-15 12:27:04.607417] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.607753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.607788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.607900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.607920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.608043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.608062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.608183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0000836f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.608201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.055 #67 NEW cov: 12137 ft: 15545 corp: 38/583b lim: 30 exec/s: 67 rss: 72Mb L: 24/29 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:20.055 [2024-05-15 12:27:04.646795] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006665 00:06:20.055 [2024-05-15 12:27:04.646961] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000a496 00:06:20.055 [2024-05-15 12:27:04.647109] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006f6f 00:06:20.055 [2024-05-15 12:27:04.647457] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9a9a029a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.647486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.647615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:656581f5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.647634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.055 [2024-05-15 12:27:04.647753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6f6f835d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.055 [2024-05-15 12:27:04.647772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.055 #68 NEW cov: 12137 ft: 15549 corp: 39/602b lim: 30 exec/s: 34 rss: 72Mb L: 19/29 MS: 1 ChangeBinInt- 00:06:20.055 #68 DONE cov: 12137 ft: 15549 corp: 39/602b lim: 30 exec/s: 34 rss: 72Mb 00:06:20.055 ###### Recommended dictionary. ###### 00:06:20.055 "\010\000\000\000" # Uses: 2 00:06:20.055 "\200\000\000\000\000\000\000\000" # Uses: 0 00:06:20.055 "\001\206\007I\241`\025\256" # Uses: 0 00:06:20.055 "\377\377~%$\020V\351" # Uses: 0 00:06:20.055 "\000\000\000\000" # Uses: 0 00:06:20.055 ###### End of recommended dictionary. ###### 00:06:20.055 Done 68 runs in 2 second(s) 00:06:20.055 [2024-05-15 12:27:04.670509] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # 
sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:20.314 12:27:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:20.314 [2024-05-15 12:27:04.816324] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:20.314 [2024-05-15 12:27:04.816386] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400828 ] 00:06:20.314 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.572 [2024-05-15 12:27:04.998968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.572 [2024-05-15 12:27:05.067483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.572 [2024-05-15 12:27:05.126877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.572 [2024-05-15 12:27:05.142824] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:20.572 [2024-05-15 12:27:05.143241] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:20.572 INFO: Running with entropic power schedule (0xFF, 100). 00:06:20.572 INFO: Seed: 3964328537 00:06:20.572 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:20.572 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:20.572 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:20.572 INFO: A corpus is not provided, starting from an empty corpus 00:06:20.572 #2 INITED exec/s: 0 rss: 63Mb 00:06:20.572 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:20.572 This may also happen if the target rejected all inputs we tried so far 00:06:20.828 [2024-05-15 12:27:05.191849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.829 [2024-05-15 12:27:05.191878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.086 NEW_FUNC[1/685]: 0x4850d0 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:21.086 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:21.086 #8 NEW cov: 11809 ft: 11810 corp: 2/10b lim: 35 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\177"- 00:06:21.086 [2024-05-15 12:27:05.522598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.086 [2024-05-15 12:27:05.522633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.086 #10 NEW cov: 11939 ft: 12496 corp: 3/20b lim: 35 exec/s: 0 rss: 70Mb L: 10/10 MS: 2 CopyPart-CrossOver- 00:06:21.086 [2024-05-15 12:27:05.562605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:5d5d00ff cdw11:5d005d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.086 [2024-05-15 12:27:05.562631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.086 #13 NEW cov: 11945 ft: 12741 corp: 4/27b lim: 35 exec/s: 0 rss: 70Mb L: 7/10 MS: 3 ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:06:21.086 [2024-05-15 12:27:05.602725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.086 [2024-05-15 12:27:05.602751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.086 #14 NEW cov: 12030 ft: 13076 corp: 5/37b lim: 35 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\177"- 00:06:21.086 [2024-05-15 12:27:05.652869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff000a cdw11:ff000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.086 [2024-05-15 12:27:05.652894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.086 #15 NEW cov: 12030 ft: 13135 corp: 6/47b lim: 35 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:21.086 [2024-05-15 12:27:05.693000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.086 [2024-05-15 12:27:05.693025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.343 #16 NEW cov: 12030 ft: 13202 corp: 7/56b lim: 35 exec/s: 0 rss: 70Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:21.343 [2024-05-15 12:27:05.743213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:dc00dcdc SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.343 [2024-05-15 12:27:05.743239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.343 [2024-05-15 12:27:05.743292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dcdc00dc cdw11:5d00dc5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.343 [2024-05-15 12:27:05.743306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.343 #20 NEW cov: 12030 ft: 13577 corp: 8/70b lim: 35 exec/s: 0 rss: 70Mb L: 14/14 MS: 4 EraseBytes-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:06:21.343 [2024-05-15 12:27:05.793077] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:21.343 [2024-05-15 12:27:05.793316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:075d0000 cdw11:5d005d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.343 [2024-05-15 12:27:05.793343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.343 #21 NEW cov: 12039 ft: 13644 corp: 9/77b lim: 35 exec/s: 0 rss: 70Mb L: 7/14 MS: 1 ChangeBinInt- 00:06:21.343 [2024-05-15 12:27:05.843388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.343 [2024-05-15 12:27:05.843414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.343 #22 NEW cov: 12039 ft: 13694 corp: 10/87b lim: 35 exec/s: 0 rss: 70Mb L: 10/14 MS: 1 ShuffleBytes- 00:06:21.343 [2024-05-15 12:27:05.893656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.343 [2024-05-15 12:27:05.893683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.343 [2024-05-15 12:27:05.893740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.343 [2024-05-15 12:27:05.893754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.343 #23 NEW cov: 12039 ft: 13737 corp: 11/105b lim: 35 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\177"- 00:06:21.343 [2024-05-15 12:27:05.943698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.343 [2024-05-15 12:27:05.943724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.600 #24 NEW cov: 12039 ft: 13782 corp: 12/118b lim: 35 exec/s: 0 rss: 70Mb L: 13/18 MS: 1 EraseBytes- 00:06:21.600 [2024-05-15 12:27:05.993809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:05.993835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:21.600 #25 NEW cov: 12039 ft: 13816 corp: 13/127b lim: 35 exec/s: 0 rss: 70Mb L: 9/18 MS: 1 ShuffleBytes- 00:06:21.600 [2024-05-15 12:27:06.034090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:dc00dcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.034117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.600 [2024-05-15 12:27:06.034173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dcdc00dc cdw11:5d00dc5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.034187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.600 #26 NEW cov: 12039 ft: 13849 corp: 14/141b lim: 35 exec/s: 0 rss: 70Mb L: 14/18 MS: 1 ChangeBit- 00:06:21.600 [2024-05-15 12:27:06.084194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.084219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.600 [2024-05-15 12:27:06.084275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff2100ff cdw11:ff007fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.084289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.600 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:21.600 #27 NEW cov: 12062 ft: 13883 corp: 15/155b lim: 35 exec/s: 0 rss: 71Mb L: 14/18 MS: 1 InsertByte- 00:06:21.600 [2024-05-15 12:27:06.134202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:5d5d00ff cdw11:5d005d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.134227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.600 #28 NEW cov: 12062 ft: 13892 corp: 16/164b lim: 35 exec/s: 0 rss: 71Mb L: 9/18 MS: 1 CopyPart- 00:06:21.600 [2024-05-15 12:27:06.174417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:dc00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.174442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.600 [2024-05-15 12:27:06.174494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dcdc00ff cdw11:dc00dcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.174512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.600 #30 NEW cov: 12062 ft: 13904 corp: 17/183b lim: 35 exec/s: 30 rss: 71Mb L: 19/19 MS: 2 EraseBytes-CrossOver- 00:06:21.600 [2024-05-15 12:27:06.214559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:5d5d00ff cdw11:93005d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.214585] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.600 [2024-05-15 12:27:06.214640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:93930093 cdw11:93009393 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.600 [2024-05-15 12:27:06.214654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.857 #31 NEW cov: 12062 ft: 13911 corp: 18/201b lim: 35 exec/s: 31 rss: 71Mb L: 18/19 MS: 1 InsertRepeatedBytes- 00:06:21.857 [2024-05-15 12:27:06.264858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:5d5d00ff cdw11:93005d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.857 [2024-05-15 12:27:06.264884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.857 [2024-05-15 12:27:06.264940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:93930093 cdw11:93009393 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.857 [2024-05-15 12:27:06.264954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.857 [2024-05-15 12:27:06.265010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:5d5d005d cdw11:ff005dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.857 [2024-05-15 12:27:06.265024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.857 #32 NEW cov: 12062 ft: 14177 corp: 19/227b lim: 35 exec/s: 32 rss: 71Mb L: 26/26 MS: 1 CMP- DE: "\377\377\377\377\377\377\377>"- 00:06:21.857 [2024-05-15 12:27:06.314733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff008a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.857 [2024-05-15 12:27:06.314760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.857 #33 NEW cov: 12062 ft: 14210 corp: 20/236b lim: 35 exec/s: 33 rss: 71Mb L: 9/26 MS: 1 ChangeBit- 00:06:21.857 [2024-05-15 12:27:06.354775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.858 [2024-05-15 12:27:06.354801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.858 #34 NEW cov: 12062 ft: 14232 corp: 21/249b lim: 35 exec/s: 34 rss: 71Mb L: 13/26 MS: 1 ChangeBit- 00:06:21.858 [2024-05-15 12:27:06.395036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:dc00dcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.858 [2024-05-15 12:27:06.395061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.858 [2024-05-15 12:27:06.395118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dcdc00dc cdw11:5d00dc5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.858 [2024-05-15 12:27:06.395131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.858 #35 NEW cov: 12062 ft: 14312 corp: 22/263b 
lim: 35 exec/s: 35 rss: 71Mb L: 14/26 MS: 1 ChangeBit- 00:06:21.858 [2024-05-15 12:27:06.445298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:5d005d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.858 [2024-05-15 12:27:06.445328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.858 [2024-05-15 12:27:06.445399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:93930093 cdw11:ff0093ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.858 [2024-05-15 12:27:06.445414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.858 [2024-05-15 12:27:06.445467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00dc cdw11:dc00dcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.858 [2024-05-15 12:27:06.445481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.115 #36 NEW cov: 12062 ft: 14339 corp: 23/287b lim: 35 exec/s: 36 rss: 71Mb L: 24/26 MS: 1 CrossOver- 00:06:22.115 [2024-05-15 12:27:06.495172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.115 [2024-05-15 12:27:06.495197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.115 #37 NEW cov: 12062 ft: 14342 corp: 24/300b lim: 35 exec/s: 37 rss: 71Mb L: 13/26 MS: 1 ShuffleBytes- 00:06:22.115 [2024-05-15 12:27:06.525261] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:22.115 [2024-05-15 12:27:06.525388] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:22.115 [2024-05-15 12:27:06.525693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.115 [2024-05-15 12:27:06.525718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.115 [2024-05-15 12:27:06.525773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.115 [2024-05-15 12:27:06.525789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.115 [2024-05-15 12:27:06.525844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.115 [2024-05-15 12:27:06.525859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.115 [2024-05-15 12:27:06.525912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.115 [2024-05-15 12:27:06.525925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.115 #38 NEW cov: 12062 ft: 14844 corp: 25/328b 
lim: 35 exec/s: 38 rss: 71Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:22.115 [2024-05-15 12:27:06.565396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.115 [2024-05-15 12:27:06.565421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.115 #39 NEW cov: 12062 ft: 14854 corp: 26/339b lim: 35 exec/s: 39 rss: 71Mb L: 11/28 MS: 1 EraseBytes- 00:06:22.116 [2024-05-15 12:27:06.615823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:5d005d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.116 [2024-05-15 12:27:06.615848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.116 [2024-05-15 12:27:06.615904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:93930093 cdw11:ff0093ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.116 [2024-05-15 12:27:06.615923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.116 [2024-05-15 12:27:06.615993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00dc cdw11:dc00dce6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.116 [2024-05-15 12:27:06.616008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.116 #40 NEW cov: 12062 ft: 14870 corp: 27/363b lim: 35 exec/s: 40 rss: 71Mb L: 24/28 MS: 1 ChangeBinInt- 00:06:22.116 [2024-05-15 12:27:06.665934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.116 [2024-05-15 12:27:06.665958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.116 [2024-05-15 12:27:06.666012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3eff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.116 [2024-05-15 12:27:06.666026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.116 [2024-05-15 12:27:06.666080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff007fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.116 [2024-05-15 12:27:06.666094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.116 #41 NEW cov: 12062 ft: 14889 corp: 28/384b lim: 35 exec/s: 41 rss: 71Mb L: 21/28 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377>"- 00:06:22.116 [2024-05-15 12:27:06.705945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.116 [2024-05-15 12:27:06.705970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.116 [2024-05-15 12:27:06.706026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:7fff00ff cdw11:ff00ffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.116 [2024-05-15 12:27:06.706039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.116 #42 NEW cov: 12062 ft: 14893 corp: 29/402b lim: 35 exec/s: 42 rss: 71Mb L: 18/28 MS: 1 CopyPart- 00:06:22.373 [2024-05-15 12:27:06.746041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.746066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.373 [2024-05-15 12:27:06.746122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.746136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.373 #43 NEW cov: 12062 ft: 14905 corp: 30/420b lim: 35 exec/s: 43 rss: 71Mb L: 18/28 MS: 1 ChangeBit- 00:06:22.373 [2024-05-15 12:27:06.786401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:dc00dcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.786425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.373 [2024-05-15 12:27:06.786497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dcdc00dc cdw11:9e00dc5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.786512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.373 [2024-05-15 12:27:06.786570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:9e9e009e cdw11:9e009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.786584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.373 [2024-05-15 12:27:06.786639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:9e9e009e cdw11:9e009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.786653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.373 #44 NEW cov: 12062 ft: 14937 corp: 31/449b lim: 35 exec/s: 44 rss: 71Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:22.373 [2024-05-15 12:27:06.826515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.826539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.373 [2024-05-15 12:27:06.826598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff2100ff cdw11:a300a3a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.826612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.373 [2024-05-15 12:27:06.826665] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:a3a300a3 cdw11:a300a3a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.826678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.373 [2024-05-15 12:27:06.826733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:a3a300a3 cdw11:ff00a37f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.826746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.373 #45 NEW cov: 12062 ft: 14942 corp: 32/478b lim: 35 exec/s: 45 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:22.373 [2024-05-15 12:27:06.876307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fff3008a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.876331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.373 #46 NEW cov: 12062 ft: 15022 corp: 33/488b lim: 35 exec/s: 46 rss: 72Mb L: 10/29 MS: 1 InsertByte- 00:06:22.373 [2024-05-15 12:27:06.926472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:275d00ff cdw11:5d005d5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.926496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.373 #47 NEW cov: 12062 ft: 15027 corp: 34/498b lim: 35 exec/s: 47 rss: 72Mb L: 10/29 MS: 1 InsertByte- 00:06:22.373 [2024-05-15 12:27:06.966650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:dc00dcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.966675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.373 [2024-05-15 12:27:06.966731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:dcdc00dc cdw11:5d00dc5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.373 [2024-05-15 12:27:06.966745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.631 #48 NEW cov: 12062 ft: 15028 corp: 35/512b lim: 35 exec/s: 48 rss: 72Mb L: 14/29 MS: 1 CopyPart- 00:06:22.631 [2024-05-15 12:27:07.016808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fff3008a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.631 [2024-05-15 12:27:07.016835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.631 [2024-05-15 12:27:07.016892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.631 [2024-05-15 12:27:07.016906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.631 #49 NEW cov: 12062 ft: 15064 corp: 36/530b lim: 35 exec/s: 49 rss: 72Mb L: 18/29 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377>"- 00:06:22.631 [2024-05-15 12:27:07.066983] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.631 [2024-05-15 12:27:07.067008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.631 [2024-05-15 12:27:07.067062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:5d007f5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.631 [2024-05-15 12:27:07.067076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.631 #50 NEW cov: 12062 ft: 15080 corp: 37/544b lim: 35 exec/s: 50 rss: 72Mb L: 14/29 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\177"- 00:06:22.631 [2024-05-15 12:27:07.107198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.631 [2024-05-15 12:27:07.107223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.631 [2024-05-15 12:27:07.107282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3eff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.631 [2024-05-15 12:27:07.107296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.631 [2024-05-15 12:27:07.107350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff007fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.631 [2024-05-15 12:27:07.107363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.631 #51 NEW cov: 12062 ft: 15088 corp: 38/565b lim: 35 exec/s: 51 rss: 72Mb L: 21/29 MS: 1 CopyPart- 00:06:22.631 [2024-05-15 12:27:07.157086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff001a cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:22.631 [2024-05-15 12:27:07.157111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.631 #52 NEW cov: 12062 ft: 15095 corp: 39/575b lim: 35 exec/s: 26 rss: 72Mb L: 10/29 MS: 1 ChangeBit- 00:06:22.631 #52 DONE cov: 12062 ft: 15095 corp: 39/575b lim: 35 exec/s: 26 rss: 72Mb 00:06:22.631 ###### Recommended dictionary. ###### 00:06:22.631 "\377\377\377\377\377\377\377\177" # Uses: 3 00:06:22.631 "\377\377\377\377\377\377\377>" # Uses: 2 00:06:22.631 ###### End of recommended dictionary. 
###### 00:06:22.631 Done 52 runs in 2 second(s) 00:06:22.631 [2024-05-15 12:27:07.178409] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:22.889 12:27:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:22.889 [2024-05-15 12:27:07.333202] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:22.889 [2024-05-15 12:27:07.333276] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401346 ] 00:06:22.889 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.146 [2024-05-15 12:27:07.513460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.146 [2024-05-15 12:27:07.580663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.146 [2024-05-15 12:27:07.640145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.146 [2024-05-15 12:27:07.656095] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:23.146 [2024-05-15 12:27:07.656549] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:23.146 INFO: Running with entropic power schedule (0xFF, 100). 00:06:23.146 INFO: Seed: 2183363615 00:06:23.146 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:23.146 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:23.146 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:23.146 INFO: A corpus is not provided, starting from an empty corpus 00:06:23.146 #2 INITED exec/s: 0 rss: 63Mb 00:06:23.146 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:23.146 This may also happen if the target rejected all inputs we tried so far 00:06:23.711 NEW_FUNC[1/674]: 0x486da0 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:23.711 NEW_FUNC[2/674]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:23.711 #5 NEW cov: 11706 ft: 11707 corp: 2/8b lim: 20 exec/s: 0 rss: 70Mb L: 7/7 MS: 3 CrossOver-InsertByte-CMP- DE: "\377\377\377\004"- 00:06:23.711 #6 NEW cov: 11836 ft: 12368 corp: 3/15b lim: 20 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ShuffleBytes- 00:06:23.711 #12 NEW cov: 11856 ft: 12926 corp: 4/26b lim: 20 exec/s: 0 rss: 70Mb L: 11/11 MS: 1 PersAutoDict- DE: "\377\377\377\004"- 00:06:23.711 #13 NEW cov: 11941 ft: 13140 corp: 5/37b lim: 20 exec/s: 0 rss: 70Mb L: 11/11 MS: 1 ChangeBit- 00:06:23.711 #14 NEW cov: 11958 ft: 13659 corp: 6/56b lim: 20 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 CMP- DE: "\217F\231\350K\007\206\000"- 00:06:23.711 #15 NEW cov: 11958 ft: 13767 corp: 7/67b lim: 20 exec/s: 0 rss: 70Mb L: 11/19 MS: 1 ShuffleBytes- 00:06:23.711 #16 NEW cov: 11958 ft: 13816 corp: 8/74b lim: 20 exec/s: 0 rss: 70Mb L: 7/19 MS: 1 ChangeBit- 00:06:23.969 #17 NEW cov: 11958 ft: 13859 corp: 9/81b lim: 20 exec/s: 0 rss: 70Mb L: 7/19 MS: 1 ChangeBinInt- 00:06:23.969 #23 NEW cov: 11962 ft: 13976 corp: 10/93b lim: 20 exec/s: 0 rss: 70Mb L: 12/19 MS: 1 InsertByte- 00:06:23.969 #24 NEW cov: 11962 ft: 14066 corp: 11/100b lim: 20 exec/s: 0 rss: 70Mb L: 7/19 MS: 1 ShuffleBytes- 00:06:23.969 #25 NEW cov: 11962 ft: 14092 corp: 12/112b lim: 20 exec/s: 0 rss: 70Mb L: 12/19 MS: 1 ChangeBinInt- 00:06:23.969 #26 NEW cov: 11962 ft: 14125 corp: 13/122b lim: 20 exec/s: 0 rss: 70Mb L: 10/19 MS: 1 EraseBytes- 00:06:23.969 #27 NEW cov: 11962 ft: 14140 corp: 14/133b lim: 20 exec/s: 0 rss: 70Mb L: 11/19 MS: 1 
ChangeBinInt- 00:06:24.226 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:24.226 #28 NEW cov: 11985 ft: 14189 corp: 15/152b lim: 20 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 PersAutoDict- DE: "\217F\231\350K\007\206\000"- 00:06:24.226 #29 NEW cov: 11985 ft: 14234 corp: 16/159b lim: 20 exec/s: 0 rss: 70Mb L: 7/19 MS: 1 ChangeBit- 00:06:24.226 #30 NEW cov: 11985 ft: 14252 corp: 17/166b lim: 20 exec/s: 30 rss: 70Mb L: 7/19 MS: 1 ChangeByte- 00:06:24.226 #31 NEW cov: 11985 ft: 14276 corp: 18/177b lim: 20 exec/s: 31 rss: 71Mb L: 11/19 MS: 1 InsertByte- 00:06:24.226 #32 NEW cov: 11985 ft: 14299 corp: 19/184b lim: 20 exec/s: 32 rss: 71Mb L: 7/19 MS: 1 ChangeBinInt- 00:06:24.484 #33 NEW cov: 11985 ft: 14320 corp: 20/203b lim: 20 exec/s: 33 rss: 71Mb L: 19/19 MS: 1 CrossOver- 00:06:24.484 #34 NEW cov: 11985 ft: 14333 corp: 21/222b lim: 20 exec/s: 34 rss: 71Mb L: 19/19 MS: 1 ChangeByte- 00:06:24.484 [2024-05-15 12:27:08.936344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.484 [2024-05-15 12:27:08.936395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.484 NEW_FUNC[1/17]: 0x1193710 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3333 00:06:24.484 NEW_FUNC[2/17]: 0x1194290 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3275 00:06:24.484 #35 NEW cov: 12228 ft: 14640 corp: 22/235b lim: 20 exec/s: 35 rss: 71Mb L: 13/19 MS: 1 InsertRepeatedBytes- 00:06:24.484 NEW_FUNC[1/2]: 0x12fa410 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:777 00:06:24.484 NEW_FUNC[2/2]: 0x131b750 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3517 00:06:24.484 #36 NEW cov: 12283 ft: 14729 corp: 23/252b lim: 20 exec/s: 36 rss: 71Mb L: 17/19 MS: 1 PersAutoDict- DE: "\377\377\377\004"- 00:06:24.484 #37 NEW cov: 12283 ft: 14799 corp: 24/259b lim: 20 exec/s: 37 rss: 71Mb L: 7/19 MS: 1 ChangeBit- 00:06:24.741 #43 NEW cov: 12283 ft: 14881 corp: 25/278b lim: 20 exec/s: 43 rss: 71Mb L: 19/19 MS: 1 PersAutoDict- DE: "\217F\231\350K\007\206\000"- 00:06:24.741 #44 NEW cov: 12283 ft: 14900 corp: 26/297b lim: 20 exec/s: 44 rss: 71Mb L: 19/19 MS: 1 ChangeByte- 00:06:24.741 #45 NEW cov: 12283 ft: 14925 corp: 27/304b lim: 20 exec/s: 45 rss: 71Mb L: 7/19 MS: 1 ChangeBit- 00:06:24.741 #46 NEW cov: 12283 ft: 14928 corp: 28/311b lim: 20 exec/s: 46 rss: 71Mb L: 7/19 MS: 1 ChangeBit- 00:06:24.741 #47 NEW cov: 12283 ft: 14937 corp: 29/330b lim: 20 exec/s: 47 rss: 71Mb L: 19/19 MS: 1 PersAutoDict- DE: "\377\377\377\004"- 00:06:24.741 #48 NEW cov: 12283 ft: 14940 corp: 30/337b lim: 20 exec/s: 48 rss: 72Mb L: 7/19 MS: 1 ShuffleBytes- 00:06:24.998 #49 NEW cov: 12283 ft: 14942 corp: 31/344b lim: 20 exec/s: 49 rss: 72Mb L: 7/19 MS: 1 ShuffleBytes- 00:06:24.998 #50 NEW cov: 12283 ft: 14950 corp: 32/355b lim: 20 exec/s: 50 rss: 72Mb L: 11/19 MS: 1 PersAutoDict- DE: "\377\377\377\004"- 00:06:24.998 #51 NEW cov: 12283 ft: 14979 corp: 33/365b lim: 20 exec/s: 51 rss: 72Mb L: 10/19 MS: 1 CrossOver- 00:06:24.998 #52 NEW cov: 12283 ft: 14986 corp: 34/380b lim: 20 exec/s: 52 rss: 72Mb L: 15/19 MS: 1 EraseBytes- 00:06:24.998 #53 NEW cov: 12283 ft: 15004 corp: 35/391b lim: 20 exec/s: 53 rss: 72Mb L: 11/19 MS: 1 
ChangeBinInt- 00:06:25.256 #54 NEW cov: 12283 ft: 15016 corp: 36/407b lim: 20 exec/s: 54 rss: 72Mb L: 16/19 MS: 1 InsertByte- 00:06:25.256 #55 NEW cov: 12283 ft: 15027 corp: 37/414b lim: 20 exec/s: 55 rss: 72Mb L: 7/19 MS: 1 ShuffleBytes- 00:06:25.256 #56 NEW cov: 12283 ft: 15099 corp: 38/434b lim: 20 exec/s: 28 rss: 72Mb L: 20/20 MS: 1 CopyPart- 00:06:25.256 #56 DONE cov: 12283 ft: 15099 corp: 38/434b lim: 20 exec/s: 28 rss: 72Mb 00:06:25.256 ###### Recommended dictionary. ###### 00:06:25.256 "\377\377\377\004" # Uses: 4 00:06:25.256 "\217F\231\350K\007\206\000" # Uses: 2 00:06:25.256 ###### End of recommended dictionary. ###### 00:06:25.256 Done 56 runs in 2 second(s) 00:06:25.256 [2024-05-15 12:27:09.711215] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:25.256 12:27:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:25.256 [2024-05-15 12:27:09.856932] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:25.256 [2024-05-15 12:27:09.856996] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401875 ] 00:06:25.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.514 [2024-05-15 12:27:10.031415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.514 [2024-05-15 12:27:10.108107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.772 [2024-05-15 12:27:10.168069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.772 [2024-05-15 12:27:10.184018] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:25.772 [2024-05-15 12:27:10.184415] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:25.772 INFO: Running with entropic power schedule (0xFF, 100). 00:06:25.772 INFO: Seed: 415401581 00:06:25.772 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:25.772 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:25.772 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:25.772 INFO: A corpus is not provided, starting from an empty corpus 00:06:25.772 #2 INITED exec/s: 0 rss: 63Mb 00:06:25.772 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:25.772 This may also happen if the target rejected all inputs we tried so far 00:06:25.772 [2024-05-15 12:27:10.232953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.772 [2024-05-15 12:27:10.232981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.772 [2024-05-15 12:27:10.233036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.772 [2024-05-15 12:27:10.233050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.030 NEW_FUNC[1/686]: 0x487e90 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:26.030 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:26.030 #7 NEW cov: 11830 ft: 11831 corp: 2/20b lim: 35 exec/s: 0 rss: 70Mb L: 19/19 MS: 5 ChangeBinInt-ChangeByte-CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:26.030 [2024-05-15 12:27:10.563879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.030 [2024-05-15 12:27:10.563919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.030 [2024-05-15 12:27:10.563985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:26.030 [2024-05-15 12:27:10.564003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.030 #8 NEW cov: 11960 ft: 12504 corp: 3/39b lim: 35 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 ChangeBit- 00:06:26.030 [2024-05-15 12:27:10.613869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.030 [2024-05-15 12:27:10.613896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.030 [2024-05-15 12:27:10.613966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.030 [2024-05-15 12:27:10.613980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.030 #9 NEW cov: 11966 ft: 12804 corp: 4/59b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CrossOver- 00:06:26.287 [2024-05-15 12:27:10.653997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.654024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.287 [2024-05-15 12:27:10.654082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffdf cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.654098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.287 #10 NEW cov: 12051 ft: 13115 corp: 5/78b lim: 35 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 ChangeBit- 00:06:26.287 [2024-05-15 12:27:10.704148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.704174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.287 [2024-05-15 12:27:10.704245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.704259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.287 #11 NEW cov: 12051 ft: 13193 corp: 6/98b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ChangeByte- 00:06:26.287 [2024-05-15 12:27:10.754265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.754292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.287 [2024-05-15 12:27:10.754362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.754377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:26.287 #12 NEW cov: 12051 ft: 13287 corp: 7/118b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:26.287 [2024-05-15 12:27:10.804359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.804390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.287 [2024-05-15 12:27:10.804462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffb6ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.804476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.287 #18 NEW cov: 12051 ft: 13395 corp: 8/138b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ChangeByte- 00:06:26.287 [2024-05-15 12:27:10.844471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.844496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.287 [2024-05-15 12:27:10.844550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:b7ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.287 [2024-05-15 12:27:10.844565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.287 #19 NEW cov: 12051 ft: 13431 corp: 9/158b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ChangeByte- 00:06:26.287 [2024-05-15 12:27:10.894616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:23ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.288 [2024-05-15 12:27:10.894642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.288 [2024-05-15 12:27:10.894698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff29ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.288 [2024-05-15 12:27:10.894713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.545 #20 NEW cov: 12051 ft: 13448 corp: 10/178b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CrossOver- 00:06:26.545 [2024-05-15 12:27:10.934761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:10.934790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.545 [2024-05-15 12:27:10.934860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:10.934874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.545 #21 NEW cov: 12051 ft: 13484 corp: 11/198b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CopyPart- 00:06:26.545 [2024-05-15 12:27:10.975140] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:10.975165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.545 [2024-05-15 12:27:10.975238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:10.975252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.545 [2024-05-15 12:27:10.975305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff29ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:10.975318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.545 [2024-05-15 12:27:10.975371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:10.975389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.545 #22 NEW cov: 12051 ft: 13836 corp: 12/228b lim: 35 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CopyPart- 00:06:26.545 [2024-05-15 12:27:11.014915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff230a23 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:11.014940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.545 [2024-05-15 12:27:11.014995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:11.015009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.545 #28 NEW cov: 12051 ft: 13865 corp: 13/248b lim: 35 exec/s: 0 rss: 70Mb L: 20/30 MS: 1 CopyPart- 00:06:26.545 [2024-05-15 12:27:11.054916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:11.054941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.545 #29 NEW cov: 12051 ft: 14634 corp: 14/260b lim: 35 exec/s: 0 rss: 71Mb L: 12/30 MS: 1 EraseBytes- 00:06:26.545 [2024-05-15 12:27:11.105367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:b8ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:11.105397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.545 [2024-05-15 12:27:11.105455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:11.105468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.545 [2024-05-15 12:27:11.105537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:11.105554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.545 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:26.545 #30 NEW cov: 12074 ft: 14865 corp: 15/281b lim: 35 exec/s: 0 rss: 71Mb L: 21/30 MS: 1 InsertByte- 00:06:26.545 [2024-05-15 12:27:11.145351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:7dff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:11.145376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.545 [2024-05-15 12:27:11.145439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffdf cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.545 [2024-05-15 12:27:11.145453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.803 #31 NEW cov: 12074 ft: 14894 corp: 16/300b lim: 35 exec/s: 0 rss: 71Mb L: 19/30 MS: 1 ChangeBit- 00:06:26.803 [2024-05-15 12:27:11.195654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.803 [2024-05-15 12:27:11.195679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.803 [2024-05-15 12:27:11.195736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff3aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.803 [2024-05-15 12:27:11.195751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.803 [2024-05-15 12:27:11.195803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.803 [2024-05-15 12:27:11.195816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.803 #32 NEW cov: 12074 ft: 14919 corp: 17/321b lim: 35 exec/s: 32 rss: 71Mb L: 21/30 MS: 1 InsertByte- 00:06:26.803 [2024-05-15 12:27:11.235585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff230a23 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.803 [2024-05-15 12:27:11.235610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.803 [2024-05-15 12:27:11.235664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00001400 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.803 [2024-05-15 12:27:11.235678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.803 #33 NEW cov: 12074 ft: 14941 corp: 18/341b lim: 35 
exec/s: 33 rss: 71Mb L: 20/30 MS: 1 ChangeBinInt- 00:06:26.803 [2024-05-15 12:27:11.285926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.803 [2024-05-15 12:27:11.285951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.803 [2024-05-15 12:27:11.286008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.803 [2024-05-15 12:27:11.286022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.803 [2024-05-15 12:27:11.286074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.803 [2024-05-15 12:27:11.286091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.803 #34 NEW cov: 12074 ft: 14958 corp: 19/362b lim: 35 exec/s: 34 rss: 71Mb L: 21/30 MS: 1 InsertByte- 00:06:26.804 [2024-05-15 12:27:11.326159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.326184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.804 [2024-05-15 12:27:11.326240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.326254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.804 [2024-05-15 12:27:11.326309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.326322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.804 [2024-05-15 12:27:11.326374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.326392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.804 #35 NEW cov: 12074 ft: 14973 corp: 20/390b lim: 35 exec/s: 35 rss: 71Mb L: 28/30 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:26.804 [2024-05-15 12:27:11.365983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.366008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.804 [2024-05-15 12:27:11.366064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:bfffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.366078] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.804 #36 NEW cov: 12074 ft: 14990 corp: 21/409b lim: 35 exec/s: 36 rss: 71Mb L: 19/30 MS: 1 ChangeBit- 00:06:26.804 [2024-05-15 12:27:11.406230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.406255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.804 [2024-05-15 12:27:11.406325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.406339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.804 [2024-05-15 12:27:11.406396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.804 [2024-05-15 12:27:11.406409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.061 #37 NEW cov: 12074 ft: 15006 corp: 22/433b lim: 35 exec/s: 37 rss: 71Mb L: 24/30 MS: 1 EraseBytes- 00:06:27.061 [2024-05-15 12:27:11.456235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.456260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.061 [2024-05-15 12:27:11.456315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffb600ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.456332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.061 #38 NEW cov: 12074 ft: 15027 corp: 23/453b lim: 35 exec/s: 38 rss: 71Mb L: 20/30 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:27.061 [2024-05-15 12:27:11.506664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.506688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.061 [2024-05-15 12:27:11.506760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffb600ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.506774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.061 [2024-05-15 12:27:11.506827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0100ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.506841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.061 [2024-05-15 12:27:11.506894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.506907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.061 #39 NEW cov: 12074 ft: 15053 corp: 24/481b lim: 35 exec/s: 39 rss: 72Mb L: 28/30 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:27.061 [2024-05-15 12:27:11.556529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff230a23 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.556553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.061 [2024-05-15 12:27:11.556609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00001400 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.556623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.061 #40 NEW cov: 12074 ft: 15080 corp: 25/501b lim: 35 exec/s: 40 rss: 72Mb L: 20/30 MS: 1 CopyPart- 00:06:27.061 [2024-05-15 12:27:11.606515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.061 [2024-05-15 12:27:11.606540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.061 #43 NEW cov: 12074 ft: 15095 corp: 26/512b lim: 35 exec/s: 43 rss: 72Mb L: 11/30 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:06:27.061 [2024-05-15 12:27:11.646740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.062 [2024-05-15 12:27:11.646765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.062 [2024-05-15 12:27:11.646823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.062 [2024-05-15 12:27:11.646836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.062 #44 NEW cov: 12074 ft: 15105 corp: 27/531b lim: 35 exec/s: 44 rss: 72Mb L: 19/30 MS: 1 CopyPart- 00:06:27.320 [2024-05-15 12:27:11.686990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.687018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.320 [2024-05-15 12:27:11.687076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.687089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.320 [2024-05-15 12:27:11.687143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.687156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.320 #45 NEW cov: 12074 ft: 15157 corp: 28/555b lim: 35 exec/s: 45 rss: 72Mb L: 24/30 MS: 1 ShuffleBytes- 00:06:27.320 [2024-05-15 12:27:11.737015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff233a cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.737040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.320 [2024-05-15 12:27:11.737098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffdf cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.737111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.320 #46 NEW cov: 12074 ft: 15231 corp: 29/574b lim: 35 exec/s: 46 rss: 72Mb L: 19/30 MS: 1 ChangeByte- 00:06:27.320 [2024-05-15 12:27:11.777259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.777284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.320 [2024-05-15 12:27:11.777341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.777355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.320 [2024-05-15 12:27:11.777409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.777423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.320 #47 NEW cov: 12074 ft: 15242 corp: 30/601b lim: 35 exec/s: 47 rss: 72Mb L: 27/30 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:27.320 [2024-05-15 12:27:11.817268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.817293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.320 [2024-05-15 12:27:11.817346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:05e8007f cdw11:20230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.817359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.320 #48 NEW cov: 12074 ft: 15262 corp: 31/621b lim: 35 exec/s: 48 rss: 72Mb L: 20/30 MS: 1 CMP- DE: "\000\000\177\005\350 #1"- 00:06:27.320 [2024-05-15 12:27:11.857374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.857406] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.320 [2024-05-15 12:27:11.857466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.857479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.320 #49 NEW cov: 12074 ft: 15275 corp: 32/641b lim: 35 exec/s: 49 rss: 72Mb L: 20/30 MS: 1 EraseBytes- 00:06:27.320 [2024-05-15 12:27:11.907524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.907549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.320 [2024-05-15 12:27:11.907604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.320 [2024-05-15 12:27:11.907618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.578 #50 NEW cov: 12074 ft: 15311 corp: 33/660b lim: 35 exec/s: 50 rss: 72Mb L: 19/30 MS: 1 ChangeByte- 00:06:27.578 [2024-05-15 12:27:11.957649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:11.957673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.578 [2024-05-15 12:27:11.957731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:b7ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:11.957745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.578 #51 NEW cov: 12074 ft: 15316 corp: 34/680b lim: 35 exec/s: 51 rss: 72Mb L: 20/30 MS: 1 ChangeByte- 00:06:27.578 [2024-05-15 12:27:12.007801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff23ff cdw11:fdff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.007826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.578 [2024-05-15 12:27:12.007884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.007897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.578 #52 NEW cov: 12074 ft: 15331 corp: 35/699b lim: 35 exec/s: 52 rss: 72Mb L: 19/30 MS: 1 ChangeBinInt- 00:06:27.578 [2024-05-15 12:27:12.057899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.057924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.578 [2024-05-15 
12:27:12.057983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0a23 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.057997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.578 #53 NEW cov: 12074 ft: 15340 corp: 36/719b lim: 35 exec/s: 53 rss: 72Mb L: 20/30 MS: 1 CopyPart- 00:06:27.578 [2024-05-15 12:27:12.098213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.098239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.578 [2024-05-15 12:27:12.098296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff5129 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.098316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.578 [2024-05-15 12:27:12.098369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffb7 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.098387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.578 #54 NEW cov: 12074 ft: 15367 corp: 37/743b lim: 35 exec/s: 54 rss: 73Mb L: 24/30 MS: 1 InsertRepeatedBytes- 00:06:27.578 [2024-05-15 12:27:12.138289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:b8ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.138315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.578 [2024-05-15 12:27:12.138371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff2923ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.138390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.578 [2024-05-15 12:27:12.138442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.138456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.578 #55 NEW cov: 12074 ft: 15369 corp: 38/764b lim: 35 exec/s: 55 rss: 73Mb L: 21/30 MS: 1 CrossOver- 00:06:27.578 [2024-05-15 12:27:12.188481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0a23 cdw11:29ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.578 [2024-05-15 12:27:12.188506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.579 [2024-05-15 12:27:12.188563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:05e8007f cdw11:20230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.579 [2024-05-15 12:27:12.188577] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.579 [2024-05-15 12:27:12.188644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.579 [2024-05-15 12:27:12.188658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.837 #56 NEW cov: 12074 ft: 15379 corp: 39/787b lim: 35 exec/s: 28 rss: 73Mb L: 23/30 MS: 1 InsertRepeatedBytes- 00:06:27.837 #56 DONE cov: 12074 ft: 15379 corp: 39/787b lim: 35 exec/s: 28 rss: 73Mb 00:06:27.837 ###### Recommended dictionary. ###### 00:06:27.837 "\001\000\000\000\000\000\000\000" # Uses: 3 00:06:27.837 "\000\000\177\005\350 #1" # Uses: 0 00:06:27.837 ###### End of recommended dictionary. ###### 00:06:27.837 Done 56 runs in 2 second(s) 00:06:27.837 [2024-05-15 12:27:12.219652] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:27.837 12:27:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:27.837 [2024-05-15 12:27:12.384121] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:27.837 [2024-05-15 12:27:12.384198] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2402295 ] 00:06:27.837 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.095 [2024-05-15 12:27:12.565036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.095 [2024-05-15 12:27:12.631355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.095 [2024-05-15 12:27:12.690599] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.095 [2024-05-15 12:27:12.706550] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:28.095 [2024-05-15 12:27:12.706965] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:28.352 INFO: Running with entropic power schedule (0xFF, 100). 00:06:28.352 INFO: Seed: 2939392383 00:06:28.352 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:28.352 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:28.352 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:28.352 INFO: A corpus is not provided, starting from an empty corpus 00:06:28.352 #2 INITED exec/s: 0 rss: 63Mb 00:06:28.352 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:28.352 This may also happen if the target rejected all inputs we tried so far 00:06:28.352 [2024-05-15 12:27:12.752635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.352 [2024-05-15 12:27:12.752663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.352 [2024-05-15 12:27:12.752717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.352 [2024-05-15 12:27:12.752731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.352 [2024-05-15 12:27:12.752784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.352 [2024-05-15 12:27:12.752801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.352 [2024-05-15 12:27:12.752854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.353 [2024-05-15 12:27:12.752867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.610 NEW_FUNC[1/686]: 0x48a020 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:28.610 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:28.610 #3 NEW cov: 11841 ft: 11837 corp: 2/44b lim: 45 exec/s: 0 rss: 70Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:06:28.610 [2024-05-15 12:27:13.083048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.083081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.610 [2024-05-15 12:27:13.083136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.083150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.610 #7 NEW cov: 11971 ft: 12769 corp: 3/66b lim: 45 exec/s: 0 rss: 70Mb L: 22/43 MS: 4 CopyPart-ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:06:28.610 [2024-05-15 12:27:13.123426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.123451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.610 [2024-05-15 12:27:13.123504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:7efd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.123518] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.610 [2024-05-15 12:27:13.123567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.123581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.610 [2024-05-15 12:27:13.123633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.123646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.610 #13 NEW cov: 11977 ft: 12956 corp: 4/109b lim: 45 exec/s: 0 rss: 70Mb L: 43/43 MS: 1 ChangeByte- 00:06:28.610 [2024-05-15 12:27:13.173215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00007a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.173240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.610 [2024-05-15 12:27:13.173292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.173305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.610 #16 NEW cov: 12062 ft: 13369 corp: 5/128b lim: 45 exec/s: 0 rss: 70Mb L: 19/43 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:06:28.610 [2024-05-15 12:27:13.213389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.610 [2024-05-15 12:27:13.213414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.611 [2024-05-15 12:27:13.213467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.611 [2024-05-15 12:27:13.213481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.868 #17 NEW cov: 12062 ft: 13440 corp: 6/151b lim: 45 exec/s: 0 rss: 70Mb L: 23/43 MS: 1 InsertByte- 00:06:28.868 [2024-05-15 12:27:13.263762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.263787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.263855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:7efd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.263869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.263921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 
cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.263934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.263984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.263998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.868 #18 NEW cov: 12062 ft: 13544 corp: 7/194b lim: 45 exec/s: 0 rss: 70Mb L: 43/43 MS: 1 ChangeBit- 00:06:28.868 [2024-05-15 12:27:13.313915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.313940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.313992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:7efd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.314006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.314055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.314067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.314118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.314131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.868 #19 NEW cov: 12062 ft: 13635 corp: 8/237b lim: 45 exec/s: 0 rss: 70Mb L: 43/43 MS: 1 ShuffleBytes- 00:06:28.868 [2024-05-15 12:27:13.364061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.364086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.364157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:82020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.364171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.364222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02fd0202 cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.364236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.364286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 
cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.364299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.868 #20 NEW cov: 12062 ft: 13678 corp: 9/280b lim: 45 exec/s: 0 rss: 70Mb L: 43/43 MS: 1 ChangeBinInt- 00:06:28.868 [2024-05-15 12:27:13.404130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.868 [2024-05-15 12:27:13.404156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.868 [2024-05-15 12:27:13.404210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:7efd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.869 [2024-05-15 12:27:13.404224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.869 [2024-05-15 12:27:13.404274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.869 [2024-05-15 12:27:13.404287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.869 [2024-05-15 12:27:13.404339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.869 [2024-05-15 12:27:13.404352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.869 #21 NEW cov: 12062 ft: 13712 corp: 10/323b lim: 45 exec/s: 0 rss: 70Mb L: 43/43 MS: 1 ChangeByte- 00:06:28.869 [2024-05-15 12:27:13.443927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3a3afe3a cdw11:3a3a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.869 [2024-05-15 12:27:13.443952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.869 [2024-05-15 12:27:13.444005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.869 [2024-05-15 12:27:13.444018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.869 #27 NEW cov: 12062 ft: 13752 corp: 11/345b lim: 45 exec/s: 0 rss: 70Mb L: 22/43 MS: 1 ChangeBinInt- 00:06:28.869 [2024-05-15 12:27:13.484073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.869 [2024-05-15 12:27:13.484098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.869 [2024-05-15 12:27:13.484151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.869 [2024-05-15 12:27:13.484165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.126 
#28 NEW cov: 12062 ft: 13798 corp: 12/367b lim: 45 exec/s: 0 rss: 70Mb L: 22/43 MS: 1 EraseBytes- 00:06:29.126 [2024-05-15 12:27:13.524312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.126 [2024-05-15 12:27:13.524337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.126 [2024-05-15 12:27:13.524395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0ac5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.126 [2024-05-15 12:27:13.524409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.126 [2024-05-15 12:27:13.524458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.126 [2024-05-15 12:27:13.524471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.126 #29 NEW cov: 12062 ft: 14034 corp: 13/395b lim: 45 exec/s: 0 rss: 71Mb L: 28/43 MS: 1 CrossOver- 00:06:29.126 [2024-05-15 12:27:13.574312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.126 [2024-05-15 12:27:13.574338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.126 [2024-05-15 12:27:13.574392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.126 [2024-05-15 12:27:13.574405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.126 #30 NEW cov: 12062 ft: 14106 corp: 14/414b lim: 45 exec/s: 0 rss: 71Mb L: 19/43 MS: 1 EraseBytes- 00:06:29.126 [2024-05-15 12:27:13.624424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.126 [2024-05-15 12:27:13.624450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.126 [2024-05-15 12:27:13.624503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.126 [2024-05-15 12:27:13.624517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.126 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:29.126 #31 NEW cov: 12085 ft: 14147 corp: 15/433b lim: 45 exec/s: 0 rss: 71Mb L: 19/43 MS: 1 ShuffleBytes- 00:06:29.127 [2024-05-15 12:27:13.674899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.127 [2024-05-15 12:27:13.674925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.127 [2024-05-15 
12:27:13.674979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:7efd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.127 [2024-05-15 12:27:13.674992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.127 [2024-05-15 12:27:13.675042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.127 [2024-05-15 12:27:13.675055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.127 [2024-05-15 12:27:13.675104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.127 [2024-05-15 12:27:13.675121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.127 #32 NEW cov: 12085 ft: 14257 corp: 16/476b lim: 45 exec/s: 0 rss: 71Mb L: 43/43 MS: 1 ShuffleBytes- 00:06:29.127 [2024-05-15 12:27:13.714726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.127 [2024-05-15 12:27:13.714751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.127 [2024-05-15 12:27:13.714805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c53fc5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.127 [2024-05-15 12:27:13.714818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.127 #33 NEW cov: 12085 ft: 14268 corp: 17/500b lim: 45 exec/s: 0 rss: 71Mb L: 24/43 MS: 1 InsertByte- 00:06:29.384 [2024-05-15 12:27:13.754977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.755002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.755056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0ac5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.755070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.755120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c51f0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.755133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.384 #34 NEW cov: 12085 ft: 14296 corp: 18/529b lim: 45 exec/s: 34 rss: 71Mb L: 29/43 MS: 1 InsertByte- 00:06:29.384 [2024-05-15 12:27:13.804991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.805016] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.805070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.805083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.384 #35 NEW cov: 12085 ft: 14304 corp: 19/552b lim: 45 exec/s: 35 rss: 71Mb L: 23/43 MS: 1 ChangeASCIIInt- 00:06:29.384 [2024-05-15 12:27:13.845385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.845410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.845464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfcfd cdw11:82020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.845478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.845526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02fd0202 cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.845539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.845590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.845607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.384 #36 NEW cov: 12085 ft: 14331 corp: 20/595b lim: 45 exec/s: 36 rss: 71Mb L: 43/43 MS: 1 ChangeBit- 00:06:29.384 [2024-05-15 12:27:13.895229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.895255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.895308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.895321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.384 #37 NEW cov: 12085 ft: 14379 corp: 21/618b lim: 45 exec/s: 37 rss: 71Mb L: 23/43 MS: 1 ChangeASCIIInt- 00:06:29.384 [2024-05-15 12:27:13.945401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.945426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.945478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.945492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.384 #38 NEW cov: 12085 ft: 14385 corp: 22/640b lim: 45 exec/s: 38 rss: 71Mb L: 22/43 MS: 1 CrossOver- 00:06:29.384 [2024-05-15 12:27:13.985781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:03fd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.985805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.985858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:7efd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.985872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.985921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.985934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.384 [2024-05-15 12:27:13.985985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.384 [2024-05-15 12:27:13.985998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.644 #39 NEW cov: 12085 ft: 14395 corp: 23/683b lim: 45 exec/s: 39 rss: 71Mb L: 43/43 MS: 1 ChangeByte- 00:06:29.644 [2024-05-15 12:27:14.035611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:03fd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.644 [2024-05-15 12:27:14.035636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.644 [2024-05-15 12:27:14.035689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0afdfdfd cdw11:fd7e0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.644 [2024-05-15 12:27:14.035703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.644 #40 NEW cov: 12085 ft: 14408 corp: 24/707b lim: 45 exec/s: 40 rss: 71Mb L: 24/43 MS: 1 CrossOver- 00:06:29.644 [2024-05-15 12:27:14.085870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:2bc50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.644 [2024-05-15 12:27:14.085895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.644 [2024-05-15 12:27:14.085947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0ac5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.644 [2024-05-15 12:27:14.085961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.644 [2024-05-15 
12:27:14.086011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c51f0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.644 [2024-05-15 12:27:14.086024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.644 #41 NEW cov: 12085 ft: 14415 corp: 25/736b lim: 45 exec/s: 41 rss: 72Mb L: 29/43 MS: 1 ChangeByte- 00:06:29.644 [2024-05-15 12:27:14.135885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.644 [2024-05-15 12:27:14.135911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.644 [2024-05-15 12:27:14.135964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c53fc5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.644 [2024-05-15 12:27:14.135977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.644 #42 NEW cov: 12085 ft: 14429 corp: 26/760b lim: 45 exec/s: 42 rss: 72Mb L: 24/43 MS: 1 ChangeByte- 00:06:29.645 [2024-05-15 12:27:14.186049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50a18 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.645 [2024-05-15 12:27:14.186073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.645 [2024-05-15 12:27:14.186125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c53fc5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.645 [2024-05-15 12:27:14.186139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.645 #43 NEW cov: 12085 ft: 14432 corp: 27/784b lim: 45 exec/s: 43 rss: 72Mb L: 24/43 MS: 1 ChangeBinInt- 00:06:29.645 [2024-05-15 12:27:14.236029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.645 [2024-05-15 12:27:14.236054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.918 #44 NEW cov: 12085 ft: 15159 corp: 28/801b lim: 45 exec/s: 44 rss: 72Mb L: 17/43 MS: 1 EraseBytes- 00:06:29.918 [2024-05-15 12:27:14.286362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.286391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.286470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.286483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.918 #45 NEW cov: 12085 ft: 15173 corp: 29/824b lim: 45 exec/s: 45 rss: 72Mb L: 23/43 MS: 1 ChangeByte- 00:06:29.918 [2024-05-15 12:27:14.326253] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3ac5fe3a cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.326278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.918 #46 NEW cov: 12085 ft: 15190 corp: 30/839b lim: 45 exec/s: 46 rss: 72Mb L: 15/43 MS: 1 EraseBytes- 00:06:29.918 [2024-05-15 12:27:14.376519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.376544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.376599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.376613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.918 #47 NEW cov: 12085 ft: 15208 corp: 31/862b lim: 45 exec/s: 47 rss: 72Mb L: 23/43 MS: 1 CMP- DE: "\377\027"- 00:06:29.918 [2024-05-15 12:27:14.417097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fdfd0afd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.417121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.417190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:7efd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.417203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.417256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.417270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.417320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.417332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.417386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.417398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.918 #48 NEW cov: 12085 ft: 15314 corp: 32/907b lim: 45 exec/s: 48 rss: 72Mb L: 45/45 MS: 1 CopyPart- 00:06:29.918 [2024-05-15 12:27:14.456737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c1c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.456763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.456816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.456830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.918 #49 NEW cov: 12085 ft: 15322 corp: 33/930b lim: 45 exec/s: 49 rss: 72Mb L: 23/45 MS: 1 ChangeBit- 00:06:29.918 [2024-05-15 12:27:14.497299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0afdff17 cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.497324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.497395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.497410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.497463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02020202 cdw11:02fd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.497476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.497525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.497538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.918 [2024-05-15 12:27:14.497587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.918 [2024-05-15 12:27:14.497600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.918 #50 NEW cov: 12085 ft: 15351 corp: 34/975b lim: 45 exec/s: 50 rss: 72Mb L: 45/45 MS: 1 PersAutoDict- DE: "\377\027"- 00:06:30.193 [2024-05-15 12:27:14.536839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3a3a2dfe cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.536865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.193 #51 NEW cov: 12085 ft: 15358 corp: 35/991b lim: 45 exec/s: 51 rss: 72Mb L: 16/45 MS: 1 InsertByte- 00:06:30.193 [2024-05-15 12:27:14.587255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.587281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.193 [2024-05-15 12:27:14.587334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c533 cdw11:c5c50006 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.587348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.193 [2024-05-15 12:27:14.587402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5330006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.587415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.193 #52 NEW cov: 12085 ft: 15362 corp: 36/1022b lim: 45 exec/s: 52 rss: 72Mb L: 31/45 MS: 1 CopyPart- 00:06:30.193 [2024-05-15 12:27:14.637390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c50ac5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.637415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.193 [2024-05-15 12:27:14.637467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0ac5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.637481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.193 [2024-05-15 12:27:14.637532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c50a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.637545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.193 #53 NEW cov: 12085 ft: 15399 corp: 37/1050b lim: 45 exec/s: 53 rss: 72Mb L: 28/45 MS: 1 CopyPart- 00:06:30.193 [2024-05-15 12:27:14.677788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0afdff17 cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.677812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.193 [2024-05-15 12:27:14.677866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.677879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.193 [2024-05-15 12:27:14.677932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02020202 cdw11:02fd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.677945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.193 [2024-05-15 12:27:14.677994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.678007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.193 [2024-05-15 12:27:14.678057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0007 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.193 [2024-05-15 12:27:14.678070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:30.193 #54 NEW cov: 12085 ft: 15407 corp: 38/1095b lim: 45 exec/s: 54 rss: 72Mb L: 45/45 MS: 1 CopyPart- 00:06:30.194 [2024-05-15 12:27:14.727633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00007a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.194 [2024-05-15 12:27:14.727658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.194 [2024-05-15 12:27:14.727714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.194 [2024-05-15 12:27:14.727728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.194 [2024-05-15 12:27:14.727796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.194 [2024-05-15 12:27:14.727809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.194 #55 NEW cov: 12085 ft: 15453 corp: 39/1130b lim: 45 exec/s: 27 rss: 73Mb L: 35/45 MS: 1 CopyPart- 00:06:30.194 #55 DONE cov: 12085 ft: 15453 corp: 39/1130b lim: 45 exec/s: 27 rss: 73Mb 00:06:30.194 ###### Recommended dictionary. ###### 00:06:30.194 "\377\027" # Uses: 1 00:06:30.194 ###### End of recommended dictionary. ###### 00:06:30.194 Done 55 runs in 2 second(s) 00:06:30.194 [2024-05-15 12:27:14.758611] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # 
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:30.452 12:27:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:30.452 [2024-05-15 12:27:14.926368] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:30.452 [2024-05-15 12:27:14.926462] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2402702 ] 00:06:30.452 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.710 [2024-05-15 12:27:15.102723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.710 [2024-05-15 12:27:15.168137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.710 [2024-05-15 12:27:15.227614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.710 [2024-05-15 12:27:15.243552] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:30.710 [2024-05-15 12:27:15.243988] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:30.710 INFO: Running with entropic power schedule (0xFF, 100). 00:06:30.710 INFO: Seed: 1182432117 00:06:30.710 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:30.710 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:30.710 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:30.710 INFO: A corpus is not provided, starting from an empty corpus 00:06:30.710 #2 INITED exec/s: 0 rss: 63Mb 00:06:30.710 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:30.710 This may also happen if the target rejected all inputs we tried so far 00:06:30.710 [2024-05-15 12:27:15.289175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:30.710 [2024-05-15 12:27:15.289203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.967 NEW_FUNC[1/683]: 0x48c830 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:30.967 NEW_FUNC[2/683]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:30.967 #3 NEW cov: 11739 ft: 11757 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:31.225 [2024-05-15 12:27:15.599912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:31.225 [2024-05-15 12:27:15.599944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.225 [2024-05-15 12:27:15.599997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.225 [2024-05-15 12:27:15.600010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.225 NEW_FUNC[1/1]: 0xf2dc20 in spdk_ring_dequeue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:415 00:06:31.225 #4 NEW cov: 11888 ft: 12616 corp: 3/8b lim: 10 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CMP- DE: "\377\377\377\021"- 00:06:31.225 [2024-05-15 12:27:15.639967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:31.225 [2024-05-15 12:27:15.639993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.225 [2024-05-15 12:27:15.640043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.225 [2024-05-15 12:27:15.640057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.225 #5 NEW cov: 11894 ft: 12779 corp: 4/13b lim: 10 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 PersAutoDict- DE: "\377\377\377\021"- 00:06:31.225 [2024-05-15 12:27:15.680099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.225 [2024-05-15 12:27:15.680125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.225 [2024-05-15 12:27:15.680177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:31.225 [2024-05-15 12:27:15.680191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.225 #7 NEW cov: 11979 ft: 13010 corp: 5/18b lim: 10 exec/s: 0 rss: 70Mb L: 5/5 MS: 2 ChangeBinInt-PersAutoDict- DE: "\377\377\377\021"- 00:06:31.225 [2024-05-15 12:27:15.720129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a 
cdw11:00000000 00:06:31.225 [2024-05-15 12:27:15.720155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.225 #8 NEW cov: 11979 ft: 13120 corp: 6/20b lim: 10 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:06:31.225 [2024-05-15 12:27:15.770576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.225 [2024-05-15 12:27:15.770601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.226 [2024-05-15 12:27:15.770669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:31.226 [2024-05-15 12:27:15.770683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.226 [2024-05-15 12:27:15.770734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:06:31.226 [2024-05-15 12:27:15.770747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.226 [2024-05-15 12:27:15.770797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.226 [2024-05-15 12:27:15.770810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.226 #9 NEW cov: 11979 ft: 13497 corp: 7/29b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 PersAutoDict- DE: "\377\377\377\021"- 00:06:31.226 [2024-05-15 12:27:15.820374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:31.226 [2024-05-15 12:27:15.820403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.483 #10 NEW cov: 11979 ft: 13568 corp: 8/32b lim: 10 exec/s: 0 rss: 70Mb L: 3/9 MS: 1 EraseBytes- 00:06:31.483 [2024-05-15 12:27:15.870755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:31.483 [2024-05-15 12:27:15.870780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.483 [2024-05-15 12:27:15.870831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff2f cdw11:00000000 00:06:31.483 [2024-05-15 12:27:15.870844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.483 [2024-05-15 12:27:15.870895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:31.483 [2024-05-15 12:27:15.870924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.483 #11 NEW cov: 11979 ft: 13739 corp: 9/38b lim: 10 exec/s: 0 rss: 70Mb L: 6/9 MS: 1 InsertByte- 00:06:31.483 [2024-05-15 12:27:15.910660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0a cdw11:00000000 00:06:31.483 [2024-05-15 12:27:15.910685] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.483 #12 NEW cov: 11979 ft: 13828 corp: 10/40b lim: 10 exec/s: 0 rss: 70Mb L: 2/9 MS: 1 ChangeBit- 00:06:31.483 [2024-05-15 12:27:15.951031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.483 [2024-05-15 12:27:15.951056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.483 [2024-05-15 12:27:15.951108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:31.484 [2024-05-15 12:27:15.951121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:15.951170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.484 [2024-05-15 12:27:15.951183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:15.951234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.484 [2024-05-15 12:27:15.951247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.484 #13 NEW cov: 11979 ft: 13884 corp: 11/49b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 ChangeByte- 00:06:31.484 [2024-05-15 12:27:16.001304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.001329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:16.001385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.001398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:16.001447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.001460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:16.001511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.001524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:16.001573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.001586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.484 #14 NEW cov: 11979 ft: 13948 corp: 12/59b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\003"- 00:06:31.484 [2024-05-15 12:27:16.051396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.051420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:16.051471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.051485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:16.051533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.051546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:16.051596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.051608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.484 [2024-05-15 12:27:16.051658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:31.484 [2024-05-15 12:27:16.051671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.484 #15 NEW cov: 11979 ft: 13976 corp: 13/69b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:31.742 [2024-05-15 12:27:16.101271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.101296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.101348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff01 cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.101362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.742 #16 NEW cov: 11979 ft: 13995 corp: 14/74b lim: 10 exec/s: 0 rss: 70Mb L: 5/10 MS: 1 ChangeBit- 00:06:31.742 [2024-05-15 12:27:16.141670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.141695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.141745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.141757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.141809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.141838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.141887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.141903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.141953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.141967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.742 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:31.742 #17 NEW cov: 12002 ft: 14019 corp: 15/84b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:31.742 [2024-05-15 12:27:16.191835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.191859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.191910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000018 cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.191923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.191972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.191986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.192035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.192048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.192098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.192110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.742 #23 NEW cov: 12002 ft: 14051 corp: 16/94b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CMP- DE: "\000\000\000\030"- 00:06:31.742 [2024-05-15 12:27:16.231649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.231673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.231740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffaa cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.231753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.742 #24 NEW cov: 12002 ft: 14067 corp: 17/99b lim: 10 exec/s: 0 rss: 70Mb L: 5/10 MS: 1 ChangeByte- 00:06:31.742 [2024-05-15 12:27:16.281676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:31.742 [2024-05-15 
12:27:16.281701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.742 #25 NEW cov: 12002 ft: 14087 corp: 18/102b lim: 10 exec/s: 25 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:06:31.742 [2024-05-15 12:27:16.321906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.321931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.742 [2024-05-15 12:27:16.321999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.742 [2024-05-15 12:27:16.322012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.742 #26 NEW cov: 12002 ft: 14123 corp: 19/107b lim: 10 exec/s: 26 rss: 70Mb L: 5/10 MS: 1 CopyPart- 00:06:32.000 [2024-05-15 12:27:16.362229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.362254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.362304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.362317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.362368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.362385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.362434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffaa cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.362447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.000 #27 NEW cov: 12002 ft: 14168 corp: 20/116b lim: 10 exec/s: 27 rss: 70Mb L: 9/10 MS: 1 PersAutoDict- DE: "\377\377\377\021"- 00:06:32.000 [2024-05-15 12:27:16.412500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.412524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.412575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.412588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.412638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.412666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.412717] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffaa cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.412730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.412779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000110a cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.412792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.000 #28 NEW cov: 12002 ft: 14174 corp: 21/126b lim: 10 exec/s: 28 rss: 71Mb L: 10/10 MS: 1 CrossOver- 00:06:32.000 [2024-05-15 12:27:16.462610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.462634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.462685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.462698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.000 [2024-05-15 12:27:16.462749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.000 [2024-05-15 12:27:16.462762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.462831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.462844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.462896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.462909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.001 #29 NEW cov: 12002 ft: 14186 corp: 22/136b lim: 10 exec/s: 29 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:06:32.001 [2024-05-15 12:27:16.502521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.502547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.502614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff43 cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.502627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.502679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002fff cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.502692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.001 #30 NEW cov: 12002 ft: 14251 corp: 23/143b 
lim: 10 exec/s: 30 rss: 71Mb L: 7/10 MS: 1 InsertByte- 00:06:32.001 [2024-05-15 12:27:16.552767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.552792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.552843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.552856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.552905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.552918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.552968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.552981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.001 #31 NEW cov: 12002 ft: 14254 corp: 24/152b lim: 10 exec/s: 31 rss: 71Mb L: 9/10 MS: 1 CopyPart- 00:06:32.001 [2024-05-15 12:27:16.602772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000700 cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.602796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.602846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.602859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.001 [2024-05-15 12:27:16.602909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002fff cdw11:00000000 00:06:32.001 [2024-05-15 12:27:16.602923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.259 #32 NEW cov: 12002 ft: 14266 corp: 25/159b lim: 10 exec/s: 32 rss: 71Mb L: 7/10 MS: 1 ChangeBinInt- 00:06:32.259 [2024-05-15 12:27:16.652694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004191 cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.652718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 #34 NEW cov: 12002 ft: 14279 corp: 26/161b lim: 10 exec/s: 34 rss: 71Mb L: 2/10 MS: 2 ChangeByte-InsertByte- 00:06:32.259 [2024-05-15 12:27:16.693130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.693155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.693206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.693219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.693270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.693299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.693351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.693364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.259 #35 NEW cov: 12002 ft: 14333 corp: 27/170b lim: 10 exec/s: 35 rss: 71Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:32.259 [2024-05-15 12:27:16.733363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.733392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.733444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.733457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.733506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000011ff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.733520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.733567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000011ff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.733580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.733628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000aa11 cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.733641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.259 #36 NEW cov: 12002 ft: 14343 corp: 28/180b lim: 10 exec/s: 36 rss: 71Mb L: 10/10 MS: 1 CrossOver- 00:06:32.259 [2024-05-15 12:27:16.773479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.773504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.773558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.773571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.773623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 
cdw10:0000ffff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.773640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.773691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.773704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.773756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.773769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.259 #37 NEW cov: 12002 ft: 14354 corp: 29/190b lim: 10 exec/s: 37 rss: 71Mb L: 10/10 MS: 1 CrossOver- 00:06:32.259 [2024-05-15 12:27:16.813297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.813322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.813375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.813394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 #38 NEW cov: 12002 ft: 14385 corp: 30/195b lim: 10 exec/s: 38 rss: 71Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:32.259 [2024-05-15 12:27:16.853718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.853742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.853810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001800 cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.853824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.853874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.853887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.853937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:32.259 [2024-05-15 12:27:16.853951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.259 [2024-05-15 12:27:16.853999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:32.260 [2024-05-15 12:27:16.854013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.518 #39 NEW cov: 12002 ft: 14404 corp: 31/205b lim: 10 exec/s: 39 rss: 71Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:32.518 [2024-05-15 
12:27:16.903930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.903956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:16.904010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000018 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.904024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:16.904074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.904087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:16.904141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.904154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:16.904205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0f cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.904218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.518 #40 NEW cov: 12002 ft: 14427 corp: 32/215b lim: 10 exec/s: 40 rss: 71Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:32.518 [2024-05-15 12:27:16.943808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.943835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:16.943888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff43 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.943902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:16.943953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000030ff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.943966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.518 #41 NEW cov: 12002 ft: 14456 corp: 33/222b lim: 10 exec/s: 41 rss: 71Mb L: 7/10 MS: 1 ChangeBinInt- 00:06:32.518 [2024-05-15 12:27:16.983784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.983809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:16.983860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:16.983874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.518 #42 NEW 
cov: 12002 ft: 14476 corp: 34/226b lim: 10 exec/s: 42 rss: 71Mb L: 4/10 MS: 1 EraseBytes- 00:06:32.518 [2024-05-15 12:27:17.033955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.033980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.034034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.034047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.518 #43 NEW cov: 12002 ft: 14520 corp: 35/231b lim: 10 exec/s: 43 rss: 71Mb L: 5/10 MS: 1 CrossOver- 00:06:32.518 [2024-05-15 12:27:17.074374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.074403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.074454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffaa cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.074466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.074515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008787 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.074532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.074583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008787 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.074596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.074646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00008711 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.074659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.518 #44 NEW cov: 12002 ft: 14537 corp: 36/241b lim: 10 exec/s: 44 rss: 71Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:32.518 [2024-05-15 12:27:17.114516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffa5 cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.114541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.114590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.114603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.114653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.114666] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.114714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000011ff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.114727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.518 [2024-05-15 12:27:17.114775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.518 [2024-05-15 12:27:17.114788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.776 #45 NEW cov: 12002 ft: 14559 corp: 37/251b lim: 10 exec/s: 45 rss: 72Mb L: 10/10 MS: 1 InsertByte- 00:06:32.776 [2024-05-15 12:27:17.164669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.164694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.776 [2024-05-15 12:27:17.164742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.164755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.776 [2024-05-15 12:27:17.164804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.164818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.776 [2024-05-15 12:27:17.164867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.164880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.776 [2024-05-15 12:27:17.164930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000011ff cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.164943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.776 #46 NEW cov: 12002 ft: 14588 corp: 38/261b lim: 10 exec/s: 46 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:06:32.776 [2024-05-15 12:27:17.214811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.214835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.776 [2024-05-15 12:27:17.214886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.214900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.776 [2024-05-15 12:27:17.214950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.214978] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.776 [2024-05-15 12:27:17.215028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008eff cdw11:00000000 00:06:32.776 [2024-05-15 12:27:17.215041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.777 [2024-05-15 12:27:17.215088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:32.777 [2024-05-15 12:27:17.215102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.777 #47 NEW cov: 12002 ft: 14607 corp: 39/271b lim: 10 exec/s: 47 rss: 72Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:32.777 [2024-05-15 12:27:17.254910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:32.777 [2024-05-15 12:27:17.254935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.777 [2024-05-15 12:27:17.254986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000aa11 cdw11:00000000 00:06:32.777 [2024-05-15 12:27:17.255000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.777 [2024-05-15 12:27:17.255051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:32.777 [2024-05-15 12:27:17.255064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.777 [2024-05-15 12:27:17.255111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004330 cdw11:00000000 00:06:32.777 [2024-05-15 12:27:17.255123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.777 [2024-05-15 12:27:17.255172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff11 cdw11:00000000 00:06:32.777 [2024-05-15 12:27:17.255185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.777 #48 NEW cov: 12002 ft: 14623 corp: 40/281b lim: 10 exec/s: 24 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:06:32.777 #48 DONE cov: 12002 ft: 14623 corp: 40/281b lim: 10 exec/s: 24 rss: 72Mb 00:06:32.777 ###### Recommended dictionary. ###### 00:06:32.777 "\377\377\377\021" # Uses: 4 00:06:32.777 "\377\377\377\377\377\377\377\003" # Uses: 0 00:06:32.777 "\000\000\000\030" # Uses: 0 00:06:32.777 ###### End of recommended dictionary. 
###### 00:06:32.777 Done 48 runs in 2 second(s) 00:06:32.777 [2024-05-15 12:27:17.284919] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:33.034 12:27:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:33.034 [2024-05-15 12:27:17.452804] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:33.034 [2024-05-15 12:27:17.452894] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403231 ] 00:06:33.034 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.034 [2024-05-15 12:27:17.628764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.292 [2024-05-15 12:27:17.694926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.292 [2024-05-15 12:27:17.754256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.292 [2024-05-15 12:27:17.770209] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:33.292 [2024-05-15 12:27:17.770630] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:33.292 INFO: Running with entropic power schedule (0xFF, 100). 00:06:33.292 INFO: Seed: 3708424417 00:06:33.292 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:33.292 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:33.292 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:33.292 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.292 #2 INITED exec/s: 0 rss: 63Mb 00:06:33.292 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:33.292 This may also happen if the target rejected all inputs we tried so far 00:06:33.292 [2024-05-15 12:27:17.846699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:33.292 [2024-05-15 12:27:17.846735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.549 NEW_FUNC[1/684]: 0x48d220 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:33.549 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:33.549 #5 NEW cov: 11758 ft: 11753 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 3 ChangeByte-CrossOver-InsertByte- 00:06:33.807 [2024-05-15 12:27:18.177440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ab0a cdw11:00000000 00:06:33.807 [2024-05-15 12:27:18.177487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.807 #9 NEW cov: 11888 ft: 12459 corp: 3/5b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 4 EraseBytes-ShuffleBytes-ChangeBit-CrossOver- 00:06:33.807 [2024-05-15 12:27:18.227440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:33.807 [2024-05-15 12:27:18.227467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.807 #10 NEW cov: 11894 ft: 12684 corp: 4/7b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:33.807 [2024-05-15 12:27:18.267572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:33.807 [2024-05-15 12:27:18.267601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.807 #11 NEW cov: 11979 ft: 12920 corp: 5/9b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:33.807 [2024-05-15 12:27:18.307907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:33.807 [2024-05-15 12:27:18.307936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.807 [2024-05-15 12:27:18.308049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ab0a cdw11:00000000 00:06:33.807 [2024-05-15 12:27:18.308066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.807 #12 NEW cov: 11979 ft: 13203 corp: 6/13b lim: 10 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CrossOver- 00:06:33.807 [2024-05-15 12:27:18.357800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:33.807 [2024-05-15 12:27:18.357829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.807 #13 NEW cov: 11979 ft: 13280 corp: 7/16b lim: 10 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 CrossOver- 00:06:33.807 [2024-05-15 12:27:18.407940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:33.807 [2024-05-15 12:27:18.407969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.065 #14 NEW cov: 11979 ft: 13346 corp: 8/18b lim: 10 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:06:34.065 [2024-05-15 12:27:18.448048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000abc9 cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.448076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.065 #15 NEW cov: 11979 ft: 13394 corp: 9/20b lim: 10 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeByte- 00:06:34.065 [2024-05-15 12:27:18.488222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0e cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.488253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.065 #16 NEW cov: 11979 ft: 13422 corp: 10/22b lim: 10 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:06:34.065 [2024-05-15 12:27:18.538371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.538406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.065 #17 NEW cov: 11979 ft: 13448 corp: 11/24b lim: 10 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:34.065 [2024-05-15 12:27:18.578731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.578757] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.065 [2024-05-15 12:27:18.578863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ab0a cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.578881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.065 #18 NEW cov: 11979 ft: 13584 corp: 12/28b lim: 10 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:06:34.065 [2024-05-15 12:27:18.629230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.629257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.065 [2024-05-15 12:27:18.629368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.629389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.065 [2024-05-15 12:27:18.629496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.629511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.065 [2024-05-15 12:27:18.629622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008eab cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.629639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.065 #19 NEW cov: 11979 ft: 13913 corp: 13/37b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:34.065 [2024-05-15 12:27:18.669547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.669574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.065 [2024-05-15 12:27:18.669682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.669699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.065 [2024-05-15 12:27:18.669810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008383 cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.669827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.065 [2024-05-15 12:27:18.669930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008383 cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.669945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.065 [2024-05-15 12:27:18.670050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.065 [2024-05-15 12:27:18.670068] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.323 #20 NEW cov: 11979 ft: 13991 corp: 14/47b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:34.323 [2024-05-15 12:27:18.709022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.709049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.323 [2024-05-15 12:27:18.709164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000abd4 cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.709181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.323 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:34.323 #21 NEW cov: 12002 ft: 14034 corp: 15/51b lim: 10 exec/s: 0 rss: 70Mb L: 4/10 MS: 1 ChangeByte- 00:06:34.323 [2024-05-15 12:27:18.748973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.749003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.323 #22 NEW cov: 12002 ft: 14105 corp: 16/53b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 CrossOver- 00:06:34.323 [2024-05-15 12:27:18.799347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ebeb cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.799373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.323 [2024-05-15 12:27:18.799490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000aab cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.799506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.323 #23 NEW cov: 12002 ft: 14134 corp: 17/58b lim: 10 exec/s: 23 rss: 70Mb L: 5/10 MS: 1 CrossOver- 00:06:34.323 [2024-05-15 12:27:18.849327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.849354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.323 #24 NEW cov: 12002 ft: 14146 corp: 18/60b lim: 10 exec/s: 24 rss: 71Mb L: 2/10 MS: 1 CopyPart- 00:06:34.323 [2024-05-15 12:27:18.889631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ebeb cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.889660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.323 [2024-05-15 12:27:18.889771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ab0a cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.889788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.323 #25 NEW cov: 12002 ft: 14182 corp: 19/65b lim: 10 exec/s: 25 rss: 71Mb L: 5/10 MS: 1 
ShuffleBytes- 00:06:34.323 [2024-05-15 12:27:18.939572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.323 [2024-05-15 12:27:18.939598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.581 #26 NEW cov: 12002 ft: 14199 corp: 20/68b lim: 10 exec/s: 26 rss: 71Mb L: 3/10 MS: 1 ShuffleBytes- 00:06:34.581 [2024-05-15 12:27:18.989702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.581 [2024-05-15 12:27:18.989728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.581 #27 NEW cov: 12002 ft: 14237 corp: 21/70b lim: 10 exec/s: 27 rss: 71Mb L: 2/10 MS: 1 CopyPart- 00:06:34.581 [2024-05-15 12:27:19.039785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000028e cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.039812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.581 #28 NEW cov: 12002 ft: 14267 corp: 22/72b lim: 10 exec/s: 28 rss: 71Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:34.581 [2024-05-15 12:27:19.090367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ab00 cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.090403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.581 [2024-05-15 12:27:19.090515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.090531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.581 [2024-05-15 12:27:19.090643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001fc9 cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.090659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.581 #29 NEW cov: 12002 ft: 14422 corp: 23/78b lim: 10 exec/s: 29 rss: 71Mb L: 6/10 MS: 1 CMP- DE: "\000\000\000\037"- 00:06:34.581 [2024-05-15 12:27:19.140948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.140974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.581 [2024-05-15 12:27:19.141093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.141111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.581 [2024-05-15 12:27:19.141218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.141235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.581 [2024-05-15 12:27:19.141350] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008eab cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.141366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.581 [2024-05-15 12:27:19.141470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ab0a cdw11:00000000 00:06:34.581 [2024-05-15 12:27:19.141486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.581 #30 NEW cov: 12002 ft: 14480 corp: 24/88b lim: 10 exec/s: 30 rss: 71Mb L: 10/10 MS: 1 InsertByte- 00:06:34.581 [2024-05-15 12:27:19.180286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.582 [2024-05-15 12:27:19.180314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.839 #31 NEW cov: 12002 ft: 14543 corp: 25/91b lim: 10 exec/s: 31 rss: 71Mb L: 3/10 MS: 1 CopyPart- 00:06:34.839 [2024-05-15 12:27:19.220957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.220984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.839 [2024-05-15 12:27:19.221101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e2a cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.221117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.839 [2024-05-15 12:27:19.221223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.221239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.839 [2024-05-15 12:27:19.221348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008eab cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.221368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.839 #32 NEW cov: 12002 ft: 14553 corp: 26/100b lim: 10 exec/s: 32 rss: 71Mb L: 9/10 MS: 1 ChangeByte- 00:06:34.839 [2024-05-15 12:27:19.260441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b30a cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.260470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.839 #35 NEW cov: 12002 ft: 14562 corp: 27/102b lim: 10 exec/s: 35 rss: 71Mb L: 2/10 MS: 3 EraseBytes-CopyPart-InsertByte- 00:06:34.839 [2024-05-15 12:27:19.300617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.300642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.839 #36 NEW cov: 12002 ft: 14596 corp: 28/104b lim: 10 exec/s: 36 rss: 71Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:34.839 [2024-05-15 
12:27:19.341304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.341330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.839 [2024-05-15 12:27:19.341451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e2a cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.341468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.839 [2024-05-15 12:27:19.341570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.341585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.839 [2024-05-15 12:27:19.341687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.341702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.839 #37 NEW cov: 12002 ft: 14612 corp: 29/113b lim: 10 exec/s: 37 rss: 71Mb L: 9/10 MS: 1 CrossOver- 00:06:34.839 [2024-05-15 12:27:19.390865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002b0a cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.390892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.839 #38 NEW cov: 12002 ft: 14630 corp: 30/115b lim: 10 exec/s: 38 rss: 71Mb L: 2/10 MS: 1 ChangeBit- 00:06:34.839 [2024-05-15 12:27:19.431642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.431670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.839 [2024-05-15 12:27:19.431780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e2a cdw11:00000000 00:06:34.839 [2024-05-15 12:27:19.431798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.839 [2024-05-15 12:27:19.431904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eb0a cdw11:00000000 00:06:34.840 [2024-05-15 12:27:19.431921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.840 [2024-05-15 12:27:19.432025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a8e cdw11:00000000 00:06:34.840 [2024-05-15 12:27:19.432040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.098 #39 NEW cov: 12002 ft: 14647 corp: 31/124b lim: 10 exec/s: 39 rss: 72Mb L: 9/10 MS: 1 CrossOver- 00:06:35.098 [2024-05-15 12:27:19.481665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004a00 cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.481692] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.481793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.481811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.481920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.481937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.482041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.482057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.098 #42 NEW cov: 12002 ft: 14659 corp: 32/132b lim: 10 exec/s: 42 rss: 72Mb L: 8/10 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes- 00:06:35.098 [2024-05-15 12:27:19.532119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.532145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.532253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e2a cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.532279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.532395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eb16 cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.532411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.532532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a8e cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.532548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.532657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00008eeb cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.532673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.098 #43 NEW cov: 12002 ft: 14663 corp: 33/142b lim: 10 exec/s: 43 rss: 72Mb L: 10/10 MS: 1 InsertByte- 00:06:35.098 [2024-05-15 12:27:19.572290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000caca cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.572318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.572428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000caca cdw11:00000000 00:06:35.098 [2024-05-15 
12:27:19.572446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.572560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000caca cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.572575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.098 [2024-05-15 12:27:19.572688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000caca cdw11:00000000 00:06:35.098 [2024-05-15 12:27:19.572707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.099 [2024-05-15 12:27:19.572822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ca0a cdw11:00000000 00:06:35.099 [2024-05-15 12:27:19.572837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.099 #45 NEW cov: 12002 ft: 14666 corp: 34/152b lim: 10 exec/s: 45 rss: 72Mb L: 10/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:35.099 [2024-05-15 12:27:19.621509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000026eb cdw11:00000000 00:06:35.099 [2024-05-15 12:27:19.621534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.099 #46 NEW cov: 12002 ft: 14670 corp: 35/155b lim: 10 exec/s: 46 rss: 72Mb L: 3/10 MS: 1 InsertByte- 00:06:35.099 [2024-05-15 12:27:19.661685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000abc9 cdw11:00000000 00:06:35.099 [2024-05-15 12:27:19.661712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.099 #47 NEW cov: 12002 ft: 14674 corp: 36/157b lim: 10 exec/s: 47 rss: 72Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:35.099 [2024-05-15 12:27:19.702116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb0e cdw11:00000000 00:06:35.099 [2024-05-15 12:27:19.702143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.099 [2024-05-15 12:27:19.702265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.099 [2024-05-15 12:27:19.702282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.099 [2024-05-15 12:27:19.702400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000001f cdw11:00000000 00:06:35.099 [2024-05-15 12:27:19.702415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.358 #48 NEW cov: 12002 ft: 14690 corp: 37/163b lim: 10 exec/s: 48 rss: 72Mb L: 6/10 MS: 1 PersAutoDict- DE: "\000\000\000\037"- 00:06:35.358 [2024-05-15 12:27:19.752703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb8e cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.752731] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.358 [2024-05-15 12:27:19.752853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008e8e cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.752871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.358 [2024-05-15 12:27:19.752983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002aeb cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.753001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.358 [2024-05-15 12:27:19.753107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a8e cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.753124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.358 [2024-05-15 12:27:19.753236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00008eeb cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.753252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.358 #49 NEW cov: 12002 ft: 14692 corp: 38/173b lim: 10 exec/s: 49 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:06:35.358 [2024-05-15 12:27:19.792826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000caca cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.792855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.358 [2024-05-15 12:27:19.792968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d3ca cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.792985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.358 [2024-05-15 12:27:19.793103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000caca cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.793120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.358 [2024-05-15 12:27:19.793242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000caca cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.793260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.358 [2024-05-15 12:27:19.793369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ca0a cdw11:00000000 00:06:35.358 [2024-05-15 12:27:19.793392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.358 #50 NEW cov: 12002 ft: 14698 corp: 39/183b lim: 10 exec/s: 25 rss: 72Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:35.358 #50 DONE cov: 12002 ft: 14698 corp: 39/183b lim: 10 exec/s: 25 rss: 72Mb 00:06:35.358 ###### Recommended dictionary. ###### 00:06:35.358 "\000\000\000\037" # Uses: 1 00:06:35.358 ###### End of recommended dictionary. 
###### 00:06:35.358 Done 50 runs in 2 second(s) 00:06:35.358 [2024-05-15 12:27:19.824980] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:35.358 12:27:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:35.616 [2024-05-15 12:27:19.994587] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:35.616 [2024-05-15 12:27:19.994657] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403615 ] 00:06:35.616 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.616 [2024-05-15 12:27:20.183241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.873 [2024-05-15 12:27:20.251186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.873 [2024-05-15 12:27:20.311415] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.873 [2024-05-15 12:27:20.327356] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:35.873 [2024-05-15 12:27:20.327772] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:35.873 INFO: Running with entropic power schedule (0xFF, 100). 00:06:35.873 INFO: Seed: 1971475526 00:06:35.873 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:35.873 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:35.873 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:35.873 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.873 [2024-05-15 12:27:20.393903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.873 [2024-05-15 12:27:20.393938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.873 #2 INITED cov: 11785 ft: 11783 corp: 1/1b exec/s: 0 rss: 69Mb 00:06:35.873 [2024-05-15 12:27:20.444248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.873 [2024-05-15 12:27:20.444276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.873 [2024-05-15 12:27:20.444404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.873 [2024-05-15 12:27:20.444419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.131 NEW_FUNC[1/1]: 0x1d3be30 in _get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:332 00:06:36.131 #3 NEW cov: 11916 ft: 13256 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:36.389 [2024-05-15 12:27:20.775172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.389 [2024-05-15 12:27:20.775205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.389 [2024-05-15 12:27:20.775339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.389 [2024-05-15 
12:27:20.775358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.389 #4 NEW cov: 11922 ft: 13481 corp: 3/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBinInt- 00:06:36.389 [2024-05-15 12:27:20.824913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.389 [2024-05-15 12:27:20.824941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.389 #5 NEW cov: 12007 ft: 13729 corp: 4/6b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:36.389 [2024-05-15 12:27:20.875362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.389 [2024-05-15 12:27:20.875397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.389 [2024-05-15 12:27:20.875517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.389 [2024-05-15 12:27:20.875536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.389 #6 NEW cov: 12007 ft: 13886 corp: 5/8b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:36.389 [2024-05-15 12:27:20.925245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.389 [2024-05-15 12:27:20.925275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.389 #7 NEW cov: 12007 ft: 13942 corp: 6/9b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:36.389 [2024-05-15 12:27:20.975374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.389 [2024-05-15 12:27:20.975406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.389 #8 NEW cov: 12007 ft: 13992 corp: 7/10b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ShuffleBytes- 00:06:36.647 [2024-05-15 12:27:21.035874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.647 [2024-05-15 12:27:21.035899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.647 [2024-05-15 12:27:21.036033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.647 [2024-05-15 12:27:21.036051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.647 #9 NEW cov: 12007 ft: 14068 corp: 8/12b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBit- 00:06:36.647 [2024-05-15 12:27:21.085989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.647 [2024-05-15 12:27:21.086016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.647 [2024-05-15 12:27:21.086144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.647 [2024-05-15 12:27:21.086161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.647 #10 NEW cov: 12007 ft: 14245 corp: 9/14b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:36.647 [2024-05-15 12:27:21.136120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.647 [2024-05-15 12:27:21.136147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.647 [2024-05-15 12:27:21.136288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.647 [2024-05-15 12:27:21.136306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.647 #11 NEW cov: 12007 ft: 14282 corp: 10/16b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBinInt- 00:06:36.647 [2024-05-15 12:27:21.185554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.647 [2024-05-15 12:27:21.185582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.647 #12 NEW cov: 12007 ft: 14332 corp: 11/17b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeByte- 00:06:36.647 [2024-05-15 12:27:21.246167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.647 [2024-05-15 12:27:21.246195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.905 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:36.905 #13 NEW cov: 12030 ft: 14400 corp: 12/18b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:36.905 [2024-05-15 12:27:21.286096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.905 [2024-05-15 12:27:21.286123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.905 [2024-05-15 12:27:21.286259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.905 [2024-05-15 12:27:21.286278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.905 #14 NEW cov: 12030 ft: 14415 corp: 13/20b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBit- 
00:06:36.905 [2024-05-15 12:27:21.336765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.905 [2024-05-15 12:27:21.336795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.905 [2024-05-15 12:27:21.336932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.905 [2024-05-15 12:27:21.336951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.905 #15 NEW cov: 12030 ft: 14433 corp: 14/22b lim: 5 exec/s: 15 rss: 71Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:36.906 [2024-05-15 12:27:21.396893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.906 [2024-05-15 12:27:21.396923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.906 [2024-05-15 12:27:21.397057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.906 [2024-05-15 12:27:21.397075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.906 #16 NEW cov: 12030 ft: 14440 corp: 15/24b lim: 5 exec/s: 16 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:06:36.906 [2024-05-15 12:27:21.447355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.906 [2024-05-15 12:27:21.447388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.906 [2024-05-15 12:27:21.447515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.906 [2024-05-15 12:27:21.447537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.906 [2024-05-15 12:27:21.447661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.906 [2024-05-15 12:27:21.447680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.906 #17 NEW cov: 12030 ft: 14670 corp: 16/27b lim: 5 exec/s: 17 rss: 71Mb L: 3/3 MS: 1 CopyPart- 00:06:36.906 [2024-05-15 12:27:21.496943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.906 [2024-05-15 12:27:21.496972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.164 #18 NEW cov: 12030 ft: 14684 corp: 17/28b lim: 5 exec/s: 18 rss: 71Mb L: 1/3 MS: 1 ChangeBit- 00:06:37.164 [2024-05-15 12:27:21.546843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.546872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.164 #19 NEW cov: 12030 ft: 14704 corp: 18/29b lim: 5 exec/s: 19 rss: 71Mb L: 1/3 MS: 1 EraseBytes- 00:06:37.164 [2024-05-15 12:27:21.607875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.607905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.164 [2024-05-15 12:27:21.608024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.608041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.164 [2024-05-15 12:27:21.608161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.608180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.164 #20 NEW cov: 12030 ft: 14737 corp: 19/32b lim: 5 exec/s: 20 rss: 71Mb L: 3/3 MS: 1 CrossOver- 00:06:37.164 [2024-05-15 12:27:21.647456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.647484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.164 [2024-05-15 12:27:21.647599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.647617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.164 [2024-05-15 12:27:21.647735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.647752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.164 #21 NEW cov: 12030 ft: 14755 corp: 20/35b lim: 5 exec/s: 21 rss: 71Mb L: 3/3 MS: 1 CrossOver- 00:06:37.164 [2024-05-15 12:27:21.687307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.687338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.164 [2024-05-15 12:27:21.687483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.687508] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.164 #22 NEW cov: 12030 ft: 14783 corp: 21/37b lim: 5 exec/s: 22 rss: 71Mb L: 2/3 MS: 1 CopyPart- 00:06:37.164 [2024-05-15 12:27:21.747984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.748011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.164 [2024-05-15 12:27:21.748125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.164 [2024-05-15 12:27:21.748143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.164 #23 NEW cov: 12030 ft: 14818 corp: 22/39b lim: 5 exec/s: 23 rss: 71Mb L: 2/3 MS: 1 ChangeByte- 00:06:37.422 [2024-05-15 12:27:21.788086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.788113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:21.788233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.788250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.422 #24 NEW cov: 12030 ft: 14827 corp: 23/41b lim: 5 exec/s: 24 rss: 71Mb L: 2/3 MS: 1 CopyPart- 00:06:37.422 [2024-05-15 12:27:21.848253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.848279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:21.848407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.848424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.422 #25 NEW cov: 12030 ft: 14861 corp: 24/43b lim: 5 exec/s: 25 rss: 71Mb L: 2/3 MS: 1 ShuffleBytes- 00:06:37.422 [2024-05-15 12:27:21.887891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.887918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:21.888037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.888054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:37.422 #26 NEW cov: 12030 ft: 14918 corp: 25/45b lim: 5 exec/s: 26 rss: 71Mb L: 2/3 MS: 1 ShuffleBytes- 00:06:37.422 [2024-05-15 12:27:21.928980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.929006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:21.929134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.929151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:21.929273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.929289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:21.929416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.929432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.422 #27 NEW cov: 12030 ft: 15190 corp: 26/49b lim: 5 exec/s: 27 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:06:37.422 [2024-05-15 12:27:21.968717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.968743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:21.968860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.968877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:21.969009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:21.969026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.422 #28 NEW cov: 12030 ft: 15212 corp: 27/52b lim: 5 exec/s: 28 rss: 71Mb L: 3/4 MS: 1 CrossOver- 00:06:37.422 [2024-05-15 12:27:22.008655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:22.008683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.422 [2024-05-15 12:27:22.008813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:37.422 [2024-05-15 12:27:22.008830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.422 #29 NEW cov: 12030 ft: 15226 corp: 28/54b lim: 5 exec/s: 29 rss: 71Mb L: 2/4 MS: 1 CrossOver- 00:06:37.680 [2024-05-15 12:27:22.068634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.068661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.680 #30 NEW cov: 12030 ft: 15236 corp: 29/55b lim: 5 exec/s: 30 rss: 72Mb L: 1/4 MS: 1 ChangeBit- 00:06:37.680 [2024-05-15 12:27:22.128721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.128748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.680 #31 NEW cov: 12030 ft: 15241 corp: 30/56b lim: 5 exec/s: 31 rss: 72Mb L: 1/4 MS: 1 ChangeBit- 00:06:37.680 [2024-05-15 12:27:22.189274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.189306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.680 [2024-05-15 12:27:22.189440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.189458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.680 #32 NEW cov: 12030 ft: 15259 corp: 31/58b lim: 5 exec/s: 32 rss: 72Mb L: 2/4 MS: 1 InsertByte- 00:06:37.680 [2024-05-15 12:27:22.238701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.238728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.680 #33 NEW cov: 12030 ft: 15274 corp: 32/59b lim: 5 exec/s: 33 rss: 72Mb L: 1/4 MS: 1 ChangeBit- 00:06:37.680 [2024-05-15 12:27:22.290218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.290245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.680 [2024-05-15 12:27:22.290377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.290397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.680 [2024-05-15 12:27:22.290514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.290531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.680 [2024-05-15 12:27:22.290653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.290670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.680 [2024-05-15 12:27:22.290795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.680 [2024-05-15 12:27:22.290814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:37.939 #34 NEW cov: 12030 ft: 15350 corp: 33/64b lim: 5 exec/s: 34 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:37.939 [2024-05-15 12:27:22.349448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.939 [2024-05-15 12:27:22.349477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.939 #35 NEW cov: 12030 ft: 15358 corp: 34/65b lim: 5 exec/s: 17 rss: 72Mb L: 1/5 MS: 1 CopyPart- 00:06:37.939 #35 DONE cov: 12030 ft: 15358 corp: 34/65b lim: 5 exec/s: 17 rss: 72Mb 00:06:37.939 Done 35 runs in 2 second(s) 00:06:37.939 [2024-05-15 12:27:22.378984] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:37.939 12:27:22 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:37.939 12:27:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:37.939 [2024-05-15 12:27:22.545604] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:37.939 [2024-05-15 12:27:22.545675] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404054 ] 00:06:38.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.197 [2024-05-15 12:27:22.722351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.197 [2024-05-15 12:27:22.789655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.455 [2024-05-15 12:27:22.849779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.455 [2024-05-15 12:27:22.865728] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:38.455 [2024-05-15 12:27:22.866147] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:38.455 INFO: Running with entropic power schedule (0xFF, 100). 
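[editor's note] The shell trace above shows nvmf/run.sh preparing fuzzer instance 9: deriving port 4409 via printf %02d, rewriting the JSON config's trsvcid with sed, emitting LeakSanitizer suppressions, and launching llvm_nvme_fuzz against the per-instance corpus. The following is a hedged, self-contained sketch of that flow reconstructed from the trace only; it is not the actual run.sh, and the output redirections for sed/echo (to $nvmf_cfg and $suppress_file) are assumptions, since xtrace does not show them.

```bash
#!/usr/bin/env bash
# Sketch of the per-instance setup visible in the trace above (assumptions noted inline).
set -e

fuzzer_type=9                      # ninth NVMf fuzzer in this batch (-Z 9)
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

# Port 44xx derived from the fuzzer index: printf %02d 9 -> "09" -> 4409.
port=44$(printf %02d "$fuzzer_type")

nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz
corpus_dir=$rootdir/../corpus/llvm_nvmf_${fuzzer_type}
mkdir -p "$corpus_dir"

# Rewrite the template config so the TCP target listens on the derived port
# (sed expression matches the trace; redirect to $nvmf_cfg is assumed).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Expected leaks suppressed for LeakSanitizer (redirects assumed).
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"

trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

# Launch the fuzzer for 1 second (-t 1) on core mask 0x1 with 512 MB hugepages,
# pointing it at the freshly written config and corpus directory.
LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
  "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m 0x1 -s 512 -P "$rootdir/../output/llvm/" \
    -F "$trid" -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$fuzzer_type"
```

[editor's note] With that setup, the "NVMe/TCP Target Listening on 127.0.0.1 port 4409" and seed/coverage lines that follow are the fuzzer's own startup output for this instance.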
00:06:38.455 INFO: Seed: 213488919 00:06:38.455 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:38.455 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:38.455 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:38.455 INFO: A corpus is not provided, starting from an empty corpus 00:06:38.455 [2024-05-15 12:27:22.911340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:22.911368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.455 #2 INITED cov: 11786 ft: 11787 corp: 1/1b exec/s: 0 rss: 68Mb 00:06:38.455 [2024-05-15 12:27:22.951974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:22.952000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.455 [2024-05-15 12:27:22.952066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:22.952080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.455 [2024-05-15 12:27:22.952135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:22.952148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.455 [2024-05-15 12:27:22.952201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:22.952214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.455 [2024-05-15 12:27:22.952268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:22.952281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.455 #3 NEW cov: 11916 ft: 13310 corp: 2/6b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:38.455 [2024-05-15 12:27:23.002100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:23.002125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.455 [2024-05-15 12:27:23.002183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:23.002197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.455 [2024-05-15 12:27:23.002251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:23.002280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.455 [2024-05-15 12:27:23.002334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:23.002347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.455 [2024-05-15 12:27:23.002405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.455 [2024-05-15 12:27:23.002419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.455 #4 NEW cov: 11922 ft: 13513 corp: 3/11b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:38.455 [2024-05-15 12:27:23.042179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.456 [2024-05-15 12:27:23.042204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.456 [2024-05-15 12:27:23.042261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.456 [2024-05-15 12:27:23.042274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.456 [2024-05-15 12:27:23.042334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.456 [2024-05-15 12:27:23.042347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.456 [2024-05-15 12:27:23.042404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.456 [2024-05-15 12:27:23.042417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.456 [2024-05-15 12:27:23.042471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.456 [2024-05-15 12:27:23.042484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.712 #5 NEW cov: 12007 ft: 13774 corp: 4/16b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:06:38.712 [2024-05-15 12:27:23.092376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 
12:27:23.092405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.092462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.092476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.092531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.092544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.092598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.092611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.092665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.092678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.712 #6 NEW cov: 12007 ft: 13838 corp: 5/21b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:38.712 [2024-05-15 12:27:23.132489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.132514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.132586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.132600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.132655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.132668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.132728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.132742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.132796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.132809] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.712 #7 NEW cov: 12007 ft: 13909 corp: 6/26b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:38.712 [2024-05-15 12:27:23.182631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.182656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.182714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.182727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.182780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.182793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.182847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.182860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.182913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.182926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.712 #8 NEW cov: 12007 ft: 13963 corp: 7/31b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:38.712 [2024-05-15 12:27:23.232775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.232800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.232874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.232889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.232941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.232954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.233008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:38.712 [2024-05-15 12:27:23.233021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.712 [2024-05-15 12:27:23.233078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.712 [2024-05-15 12:27:23.233091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.713 #9 NEW cov: 12007 ft: 13989 corp: 8/36b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 CopyPart- 00:06:38.713 [2024-05-15 12:27:23.282896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.713 [2024-05-15 12:27:23.282921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.713 [2024-05-15 12:27:23.282978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.713 [2024-05-15 12:27:23.282992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.713 [2024-05-15 12:27:23.283061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.713 [2024-05-15 12:27:23.283075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.713 [2024-05-15 12:27:23.283127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.713 [2024-05-15 12:27:23.283140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.713 [2024-05-15 12:27:23.283193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.713 [2024-05-15 12:27:23.283206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.713 #10 NEW cov: 12007 ft: 14010 corp: 9/41b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:38.970 [2024-05-15 12:27:23.333078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.333103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.333174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.333188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.333242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 
nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.333255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.333320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.333333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.333387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.333401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.970 #11 NEW cov: 12007 ft: 14064 corp: 10/46b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:38.970 [2024-05-15 12:27:23.373135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.373160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.373214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.373228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.373280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.373293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.373346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.373359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.373414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.373427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.970 #12 NEW cov: 12007 ft: 14077 corp: 11/51b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:38.970 [2024-05-15 12:27:23.413257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.413281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.413353] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.413367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.413424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.413438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.413501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.413514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.413569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.413583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.970 #13 NEW cov: 12007 ft: 14174 corp: 12/56b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:06:38.970 [2024-05-15 12:27:23.463383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.463408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.463485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.463499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.463564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.463579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.463633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.463646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.970 [2024-05-15 12:27:23.463700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.970 [2024-05-15 12:27:23.463713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.971 #14 NEW cov: 12007 ft: 14194 corp: 13/61b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 
ShuffleBytes- 00:06:38.971 [2024-05-15 12:27:23.503352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.503376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.971 [2024-05-15 12:27:23.503451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.503465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.971 [2024-05-15 12:27:23.503522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.503536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.971 [2024-05-15 12:27:23.503591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.503604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.971 #15 NEW cov: 12007 ft: 14214 corp: 14/65b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 EraseBytes- 00:06:38.971 [2024-05-15 12:27:23.543157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.543181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.971 [2024-05-15 12:27:23.543255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.543268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.971 #16 NEW cov: 12007 ft: 14424 corp: 15/67b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 InsertByte- 00:06:38.971 [2024-05-15 12:27:23.583831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.583859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.971 [2024-05-15 12:27:23.583918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.583932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.971 [2024-05-15 12:27:23.583987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.584000] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.971 [2024-05-15 12:27:23.584055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.584067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.971 [2024-05-15 12:27:23.584123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.971 [2024-05-15 12:27:23.584136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.229 #17 NEW cov: 12007 ft: 14435 corp: 16/72b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:06:39.229 [2024-05-15 12:27:23.623831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.623855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.623928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.623943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.623996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.624009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.624064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.624077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.624133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.624146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.229 #18 NEW cov: 12007 ft: 14460 corp: 17/77b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:06:39.229 [2024-05-15 12:27:23.663970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.663997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.664052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 
12:27:23.664066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.664126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.664139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.664194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.664207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.664261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.664274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.229 #19 NEW cov: 12007 ft: 14476 corp: 18/82b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:39.229 [2024-05-15 12:27:23.714120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.714146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.714221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.714235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.714291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.714304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.714358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.714371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.714432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.714446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.229 #20 NEW cov: 12007 ft: 14484 corp: 19/87b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 00:06:39.229 [2024-05-15 12:27:23.754216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.754241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.754298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.754312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.754368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.754385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.754444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.754457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.754511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.754524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.229 #21 NEW cov: 12007 ft: 14514 corp: 20/92b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:06:39.229 [2024-05-15 12:27:23.794317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.794342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.794402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.794417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.794486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.794500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.794555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.229 [2024-05-15 12:27:23.794569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.229 [2024-05-15 12:27:23.794623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:39.229 [2024-05-15 12:27:23.794636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.486 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:39.486 #22 NEW cov: 12030 ft: 14548 corp: 21/97b lim: 5 exec/s: 22 rss: 71Mb L: 5/5 MS: 1 ChangeByte- 00:06:39.744 [2024-05-15 12:27:24.105265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.105296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.105357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.105371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.105430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.105444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.105500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.105517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.105574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.105588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.744 #23 NEW cov: 12030 ft: 14615 corp: 22/102b lim: 5 exec/s: 23 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:39.744 [2024-05-15 12:27:24.155332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.155358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.155438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.155463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.155520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.155533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.744 
[2024-05-15 12:27:24.155589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.155602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.155658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.155672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.744 #24 NEW cov: 12030 ft: 14645 corp: 23/107b lim: 5 exec/s: 24 rss: 71Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:39.744 [2024-05-15 12:27:24.205447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.205472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.205530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.205543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.205599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.205613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.205669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.205682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.744 [2024-05-15 12:27:24.205739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.744 [2024-05-15 12:27:24.205756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.744 #25 NEW cov: 12030 ft: 14687 corp: 24/112b lim: 5 exec/s: 25 rss: 71Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:39.744 [2024-05-15 12:27:24.255285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.255310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.255368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.255386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.255457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.255471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.745 #26 NEW cov: 12030 ft: 14865 corp: 25/115b lim: 5 exec/s: 26 rss: 71Mb L: 3/5 MS: 1 EraseBytes- 00:06:39.745 [2024-05-15 12:27:24.295686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.295711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.295786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.295801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.295856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.295869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.295926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.295940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.295994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.296008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.745 #27 NEW cov: 12030 ft: 14882 corp: 26/120b lim: 5 exec/s: 27 rss: 71Mb L: 5/5 MS: 1 ChangeBit- 00:06:39.745 [2024-05-15 12:27:24.345747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.345774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.345834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.345849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.345906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.345923] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.745 [2024-05-15 12:27:24.345981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.745 [2024-05-15 12:27:24.345996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.004 #28 NEW cov: 12030 ft: 14915 corp: 27/124b lim: 5 exec/s: 28 rss: 71Mb L: 4/5 MS: 1 EraseBytes- 00:06:40.004 [2024-05-15 12:27:24.396021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.396046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.396122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.396136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.396193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.396207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.396266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.396279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.396335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.396349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.004 #29 NEW cov: 12030 ft: 14950 corp: 28/129b lim: 5 exec/s: 29 rss: 71Mb L: 5/5 MS: 1 ChangeByte- 00:06:40.004 [2024-05-15 12:27:24.436101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.436126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.436188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.436202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.436260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.436274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.436331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.436344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.436404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.436421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.004 #30 NEW cov: 12030 ft: 14977 corp: 29/134b lim: 5 exec/s: 30 rss: 71Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:40.004 [2024-05-15 12:27:24.476154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.476179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.476237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.476251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.476306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.476319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.476376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.476394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.476467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.476481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.004 #31 NEW cov: 12030 ft: 15020 corp: 30/139b lim: 5 exec/s: 31 rss: 71Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:40.004 [2024-05-15 12:27:24.516279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.516304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.516385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.516399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.516456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.516469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.516524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.516548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.516603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.516616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.004 #32 NEW cov: 12030 ft: 15039 corp: 31/144b lim: 5 exec/s: 32 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:40.004 [2024-05-15 12:27:24.556399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.556428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.556490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.556504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.556559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.556572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.556627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.556640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.556696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.556710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.004 #33 NEW cov: 12030 ft: 15072 corp: 32/149b lim: 5 exec/s: 33 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:40.004 [2024-05-15 12:27:24.596536] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.596561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.596635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.596649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.596706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.596719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.596775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.596788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.004 [2024-05-15 12:27:24.596843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.004 [2024-05-15 12:27:24.596856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.004 #34 NEW cov: 12030 ft: 15089 corp: 33/154b lim: 5 exec/s: 34 rss: 71Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:40.263 [2024-05-15 12:27:24.636677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.636703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.636775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.636792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.636849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.636862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.636917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.636930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.636986] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.637000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.263 #35 NEW cov: 12030 ft: 15165 corp: 34/159b lim: 5 exec/s: 35 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:40.263 [2024-05-15 12:27:24.686848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.686874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.686946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.686960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.687018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.687031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.687087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.687100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.687154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.687168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.263 #36 NEW cov: 12030 ft: 15178 corp: 35/164b lim: 5 exec/s: 36 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:06:40.263 [2024-05-15 12:27:24.736974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.736999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.737054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.737068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.737121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.737137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:40.263 [2024-05-15 12:27:24.737191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.737205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.737261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.737275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.263 #37 NEW cov: 12030 ft: 15184 corp: 36/169b lim: 5 exec/s: 37 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:06:40.263 [2024-05-15 12:27:24.776924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.776949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.777009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.777023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.777077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.777090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.777147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.777160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.263 #38 NEW cov: 12030 ft: 15195 corp: 37/173b lim: 5 exec/s: 38 rss: 72Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:40.263 [2024-05-15 12:27:24.826884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.826910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.826985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.826999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.827054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.827068] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.263 #39 NEW cov: 12030 ft: 15204 corp: 38/176b lim: 5 exec/s: 39 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:06:40.263 [2024-05-15 12:27:24.877393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.877419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.877476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.877493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.877549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.877563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.877621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.263 [2024-05-15 12:27:24.877635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.263 [2024-05-15 12:27:24.877693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.264 [2024-05-15 12:27:24.877707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.521 #40 NEW cov: 12030 ft: 15220 corp: 39/181b lim: 5 exec/s: 20 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:06:40.521 #40 DONE cov: 12030 ft: 15220 corp: 39/181b lim: 5 exec/s: 20 rss: 72Mb 00:06:40.521 Done 40 runs in 2 second(s) 00:06:40.521 [2024-05-15 12:27:24.908874] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.521 12:27:25 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.521 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.522 12:27:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:40.522 [2024-05-15 12:27:25.073278] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:40.522 [2024-05-15 12:27:25.073355] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404591 ] 00:06:40.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.779 [2024-05-15 12:27:25.253016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.779 [2024-05-15 12:27:25.317995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.779 [2024-05-15 12:27:25.377623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.779 [2024-05-15 12:27:25.393579] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:40.779 [2024-05-15 12:27:25.394011] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:41.037 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.037 INFO: Seed: 2742499926 00:06:41.037 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:41.037 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:41.037 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:41.037 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.037 #2 INITED exec/s: 0 rss: 63Mb 00:06:41.037 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:41.037 This may also happen if the target rejected all inputs we tried so far 00:06:41.037 [2024-05-15 12:27:25.449125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.037 [2024-05-15 12:27:25.449152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.294 NEW_FUNC[1/682]: 0x48eb90 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:41.294 NEW_FUNC[2/682]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.294 #14 NEW cov: 11785 ft: 11786 corp: 2/11b lim: 40 exec/s: 0 rss: 70Mb L: 10/10 MS: 2 CopyPart-CMP- DE: "\001\000\000\000\000\000\000?"- 00:06:41.294 [2024-05-15 12:27:25.779986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.294 [2024-05-15 12:27:25.780017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.294 NEW_FUNC[1/3]: 0x1756aa0 in nvme_complete_register_operations /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:726 00:06:41.294 NEW_FUNC[2/3]: 0x1769c00 in nvme_robust_mutex_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1153 00:06:41.294 #15 NEW cov: 11939 ft: 12330 corp: 3/21b lim: 40 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:06:41.294 [2024-05-15 12:27:25.830038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.295 [2024-05-15 12:27:25.830063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.295 #16 NEW cov: 11945 ft: 12690 corp: 4/31b lim: 40 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:06:41.295 [2024-05-15 12:27:25.870170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.295 [2024-05-15 12:27:25.870196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.295 #17 NEW cov: 12030 ft: 13011 corp: 5/41b lim: 40 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:06:41.552 [2024-05-15 12:27:25.920322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.552 [2024-05-15 12:27:25.920351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.552 #23 NEW cov: 12030 ft: 13086 corp: 6/51b lim: 40 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:41.552 [2024-05-15 12:27:25.970574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.552 [2024-05-15 12:27:25.970599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:41.552 [2024-05-15 12:27:25.970657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:003f0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.552 [2024-05-15 12:27:25.970671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.552 #24 NEW cov: 12030 ft: 13442 corp: 7/69b lim: 40 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:41.552 [2024-05-15 12:27:26.010560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.552 [2024-05-15 12:27:26.010586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.552 #25 NEW cov: 12030 ft: 13532 corp: 8/84b lim: 40 exec/s: 0 rss: 70Mb L: 15/18 MS: 1 CopyPart- 00:06:41.552 [2024-05-15 12:27:26.050639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.552 [2024-05-15 12:27:26.050665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.552 #26 NEW cov: 12030 ft: 13575 corp: 9/97b lim: 40 exec/s: 0 rss: 70Mb L: 13/18 MS: 1 CopyPart- 00:06:41.553 [2024-05-15 12:27:26.090747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.553 [2024-05-15 12:27:26.090772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.553 #27 NEW cov: 12030 ft: 13587 corp: 10/108b lim: 40 exec/s: 0 rss: 70Mb L: 11/18 MS: 1 CrossOver- 00:06:41.553 [2024-05-15 12:27:26.140897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00002600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.553 [2024-05-15 12:27:26.140922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.553 #33 NEW cov: 12030 ft: 13651 corp: 11/118b lim: 40 exec/s: 0 rss: 70Mb L: 10/18 MS: 1 ChangeByte- 00:06:41.810 [2024-05-15 12:27:26.181016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:17000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.181041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.810 #34 NEW cov: 12030 ft: 13696 corp: 12/129b lim: 40 exec/s: 0 rss: 70Mb L: 11/18 MS: 1 CMP- DE: "\027\000\000\000"- 00:06:41.810 [2024-05-15 12:27:26.231171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0117 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.231196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.810 #35 NEW cov: 12030 ft: 13718 corp: 13/144b lim: 40 exec/s: 0 rss: 70Mb L: 15/18 MS: 1 PersAutoDict- DE: "\027\000\000\000"- 00:06:41.810 [2024-05-15 12:27:26.281230] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.281258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.810 #36 NEW cov: 12030 ft: 13741 corp: 14/157b lim: 40 exec/s: 0 rss: 71Mb L: 13/18 MS: 1 CrossOver- 00:06:41.810 [2024-05-15 12:27:26.331393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.331418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.810 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:41.810 #37 NEW cov: 12053 ft: 13775 corp: 15/169b lim: 40 exec/s: 0 rss: 71Mb L: 12/18 MS: 1 InsertByte- 00:06:41.810 [2024-05-15 12:27:26.371754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.371779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.810 [2024-05-15 12:27:26.371834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00003f3f cdw11:0a0a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.371848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.810 [2024-05-15 12:27:26.371900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.371913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.810 #38 NEW cov: 12053 ft: 14107 corp: 16/194b lim: 40 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 CopyPart- 00:06:41.810 [2024-05-15 12:27:26.411779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0117 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.411804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.810 [2024-05-15 12:27:26.411862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:1700002c cdw11:000a3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.810 [2024-05-15 12:27:26.411876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.068 #39 NEW cov: 12053 ft: 14110 corp: 17/210b lim: 40 exec/s: 39 rss: 71Mb L: 16/25 MS: 1 InsertByte- 00:06:42.068 [2024-05-15 12:27:26.461994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.462018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.068 [2024-05-15 
12:27:26.462094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00003f3f cdw11:0a0a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.462107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.068 [2024-05-15 12:27:26.462163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00010080 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.462176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.068 #40 NEW cov: 12053 ft: 14131 corp: 18/235b lim: 40 exec/s: 40 rss: 71Mb L: 25/25 MS: 1 ChangeBit- 00:06:42.068 [2024-05-15 12:27:26.512038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.512065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.068 [2024-05-15 12:27:26.512139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:003f0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.512154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.068 #41 NEW cov: 12053 ft: 14144 corp: 19/253b lim: 40 exec/s: 41 rss: 71Mb L: 18/25 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:42.068 [2024-05-15 12:27:26.552030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:0000f700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.552054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.068 #42 NEW cov: 12053 ft: 14157 corp: 20/263b lim: 40 exec/s: 42 rss: 71Mb L: 10/25 MS: 1 ChangeBinInt- 00:06:42.068 [2024-05-15 12:27:26.582133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.582158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.068 #43 NEW cov: 12053 ft: 14175 corp: 21/276b lim: 40 exec/s: 43 rss: 71Mb L: 13/25 MS: 1 ShuffleBytes- 00:06:42.068 [2024-05-15 12:27:26.622231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.622255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.068 #44 NEW cov: 12053 ft: 14187 corp: 22/289b lim: 40 exec/s: 44 rss: 71Mb L: 13/25 MS: 1 ChangeBit- 00:06:42.068 [2024-05-15 12:27:26.672386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.068 [2024-05-15 12:27:26.672410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.326 #45 NEW cov: 12053 ft: 14211 corp: 23/300b lim: 40 exec/s: 45 rss: 71Mb L: 11/25 MS: 1 CopyPart- 00:06:42.326 [2024-05-15 12:27:26.712448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0117 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.326 [2024-05-15 12:27:26.712473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.326 #46 NEW cov: 12053 ft: 14224 corp: 24/315b lim: 40 exec/s: 46 rss: 71Mb L: 15/25 MS: 1 PersAutoDict- DE: "\027\000\000\000"- 00:06:42.326 [2024-05-15 12:27:26.752709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.326 [2024-05-15 12:27:26.752734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.326 [2024-05-15 12:27:26.752792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.326 [2024-05-15 12:27:26.752806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.326 #47 NEW cov: 12053 ft: 14272 corp: 25/336b lim: 40 exec/s: 47 rss: 71Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:06:42.326 [2024-05-15 12:27:26.792823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0117 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.326 [2024-05-15 12:27:26.792847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.327 [2024-05-15 12:27:26.792925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17001700 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.327 [2024-05-15 12:27:26.792939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.327 #48 NEW cov: 12053 ft: 14279 corp: 26/354b lim: 40 exec/s: 48 rss: 71Mb L: 18/25 MS: 1 CrossOver- 00:06:42.327 [2024-05-15 12:27:26.842868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:430a0a01 cdw11:17000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.327 [2024-05-15 12:27:26.842892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.327 #53 NEW cov: 12053 ft: 14289 corp: 27/365b lim: 40 exec/s: 53 rss: 71Mb L: 11/25 MS: 5 ShuffleBytes-CopyPart-ChangeByte-ShuffleBytes-CrossOver- 00:06:42.327 [2024-05-15 12:27:26.883085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:430a0a01 cdw11:1700000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.327 [2024-05-15 12:27:26.883109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.327 [2024-05-15 12:27:26.883185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a011700 cdw11:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.327 [2024-05-15 12:27:26.883199] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.327 #54 NEW cov: 12053 ft: 14327 corp: 28/382b lim: 40 exec/s: 54 rss: 71Mb L: 17/25 MS: 1 CopyPart- 00:06:42.327 [2024-05-15 12:27:26.933358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000017 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.327 [2024-05-15 12:27:26.933389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.327 [2024-05-15 12:27:26.933446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00003f3f cdw11:0a0a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.327 [2024-05-15 12:27:26.933460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.327 [2024-05-15 12:27:26.933517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00010080 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.327 [2024-05-15 12:27:26.933530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.585 #55 NEW cov: 12053 ft: 14338 corp: 29/407b lim: 40 exec/s: 55 rss: 72Mb L: 25/25 MS: 1 PersAutoDict- DE: "\027\000\000\000"- 00:06:42.585 [2024-05-15 12:27:26.983268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:74010000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:26.983292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.585 #56 NEW cov: 12053 ft: 14354 corp: 30/417b lim: 40 exec/s: 56 rss: 72Mb L: 10/25 MS: 1 CMP- DE: "t\001\000\000"- 00:06:42.585 [2024-05-15 12:27:27.033399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a003f01 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.033422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.585 #57 NEW cov: 12053 ft: 14369 corp: 31/428b lim: 40 exec/s: 57 rss: 72Mb L: 11/25 MS: 1 EraseBytes- 00:06:42.585 [2024-05-15 12:27:27.083770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:01003f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.083794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.585 [2024-05-15 12:27:27.083871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0000003f cdw11:0a0a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.083885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.585 [2024-05-15 12:27:27.083940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.083953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.585 #58 NEW cov: 12053 ft: 14394 corp: 32/453b lim: 40 exec/s: 58 rss: 72Mb L: 25/25 MS: 1 ShuffleBytes- 00:06:42.585 [2024-05-15 12:27:27.123749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0117 cdw11:2c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.123773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.585 [2024-05-15 12:27:27.123829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:1700002c cdw11:000a3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.123842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.585 #59 NEW cov: 12053 ft: 14412 corp: 33/469b lim: 40 exec/s: 59 rss: 72Mb L: 16/25 MS: 1 ChangeByte- 00:06:42.585 [2024-05-15 12:27:27.174007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.174032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.585 [2024-05-15 12:27:27.174106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00020a0a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.174120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.585 [2024-05-15 12:27:27.174177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:003f003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.585 [2024-05-15 12:27:27.174190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.843 #60 NEW cov: 12053 ft: 14438 corp: 34/494b lim: 40 exec/s: 60 rss: 72Mb L: 25/25 MS: 1 CrossOver- 00:06:42.843 [2024-05-15 12:27:27.223938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.223962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.843 #61 NEW cov: 12053 ft: 14443 corp: 35/505b lim: 40 exec/s: 61 rss: 72Mb L: 11/25 MS: 1 CopyPart- 00:06:42.843 [2024-05-15 12:27:27.254399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.254423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.843 [2024-05-15 12:27:27.254483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00003f3f cdw11:0a0a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.254497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.843 [2024-05-15 
12:27:27.254550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.254569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.843 [2024-05-15 12:27:27.254621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:01000000 cdw11:0000003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.254635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.843 #62 NEW cov: 12053 ft: 14903 corp: 36/538b lim: 40 exec/s: 62 rss: 72Mb L: 33/33 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:42.843 [2024-05-15 12:27:27.294386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:01003f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.294410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.843 [2024-05-15 12:27:27.294482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0000003f cdw11:0a0a0100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.294496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.843 [2024-05-15 12:27:27.294563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:003f003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.294576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.843 #63 NEW cov: 12053 ft: 14910 corp: 37/563b lim: 40 exec/s: 63 rss: 72Mb L: 25/33 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000?"- 00:06:42.843 [2024-05-15 12:27:27.344406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0117 cdw11:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.344431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.843 [2024-05-15 12:27:27.344504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:17001700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.344519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.843 #64 NEW cov: 12053 ft: 14911 corp: 38/582b lim: 40 exec/s: 64 rss: 72Mb L: 19/33 MS: 1 PersAutoDict- DE: "\027\000\000\000"- 00:06:42.843 [2024-05-15 12:27:27.384377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.384405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.843 #65 NEW cov: 12053 ft: 14920 corp: 39/596b lim: 40 exec/s: 65 rss: 72Mb L: 14/33 MS: 1 InsertByte- 00:06:42.843 
[2024-05-15 12:27:27.424627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.424653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.843 [2024-05-15 12:27:27.424712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a0a0a01 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.843 [2024-05-15 12:27:27.424726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.843 #66 NEW cov: 12053 ft: 14926 corp: 40/617b lim: 40 exec/s: 33 rss: 72Mb L: 21/33 MS: 1 CopyPart- 00:06:42.843 #66 DONE cov: 12053 ft: 14926 corp: 40/617b lim: 40 exec/s: 33 rss: 72Mb 00:06:42.843 ###### Recommended dictionary. ###### 00:06:42.843 "\001\000\000\000\000\000\000?" # Uses: 4 00:06:42.843 "\027\000\000\000" # Uses: 4 00:06:42.843 "t\001\000\000" # Uses: 0 00:06:42.843 ###### End of recommended dictionary. ###### 00:06:42.843 Done 66 runs in 2 second(s) 00:06:42.843 [2024-05-15 12:27:27.455399] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:43.101 12:27:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:43.101 [2024-05-15 12:27:27.622341] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:43.101 [2024-05-15 12:27:27.622425] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404960 ] 00:06:43.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.359 [2024-05-15 12:27:27.806015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.359 [2024-05-15 12:27:27.872769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.359 [2024-05-15 12:27:27.932351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.359 [2024-05-15 12:27:27.948296] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:43.359 [2024-05-15 12:27:27.948739] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:43.359 INFO: Running with entropic power schedule (0xFF, 100). 00:06:43.359 INFO: Seed: 1002527970 00:06:43.616 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:43.616 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:43.616 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:43.616 INFO: A corpus is not provided, starting from an empty corpus 00:06:43.616 #2 INITED exec/s: 0 rss: 63Mb 00:06:43.616 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:43.616 This may also happen if the target rejected all inputs we tried so far 00:06:43.616 [2024-05-15 12:27:28.004311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.616 [2024-05-15 12:27:28.004338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.616 [2024-05-15 12:27:28.004418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.616 [2024-05-15 12:27:28.004433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.616 [2024-05-15 12:27:28.004495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.616 [2024-05-15 12:27:28.004509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.874 NEW_FUNC[1/686]: 0x490900 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:43.874 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:43.874 #5 NEW cov: 11821 ft: 11822 corp: 2/25b lim: 40 exec/s: 0 rss: 70Mb L: 24/24 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:06:43.874 [2024-05-15 12:27:28.334798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b753514e cdw11:99570786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.874 [2024-05-15 12:27:28.334829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.874 #10 NEW cov: 11951 ft: 13262 corp: 3/34b lim: 40 exec/s: 0 rss: 70Mb L: 9/24 MS: 5 ChangeByte-ChangeByte-ChangeByte-ChangeBit-CMP- DE: "SQN\231W\007\206\000"- 00:06:43.874 [2024-05-15 12:27:28.374854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b753514e cdw11:99470786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.874 [2024-05-15 12:27:28.374879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.874 #16 NEW cov: 11957 ft: 13500 corp: 4/43b lim: 40 exec/s: 0 rss: 70Mb L: 9/24 MS: 1 ChangeBit- 00:06:43.874 [2024-05-15 12:27:28.424941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b753514e cdw11:ff570786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.874 [2024-05-15 12:27:28.424966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.874 #17 NEW cov: 12042 ft: 13788 corp: 5/52b lim: 40 exec/s: 0 rss: 70Mb L: 9/24 MS: 1 ChangeByte- 00:06:43.874 [2024-05-15 12:27:28.465163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:53514e99 cdw11:57078600 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.874 [2024-05-15 12:27:28.465187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.874 
#18 NEW cov: 12042 ft: 13851 corp: 6/61b lim: 40 exec/s: 0 rss: 70Mb L: 9/24 MS: 1 PersAutoDict- DE: "SQN\231W\007\206\000"- 00:06:44.131 [2024-05-15 12:27:28.505218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b7539947 cdw11:07514eff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.131 [2024-05-15 12:27:28.505243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.131 #19 NEW cov: 12042 ft: 13913 corp: 7/73b lim: 40 exec/s: 0 rss: 70Mb L: 12/24 MS: 1 CrossOver- 00:06:44.131 [2024-05-15 12:27:28.555395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:53514f99 cdw11:57078600 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.131 [2024-05-15 12:27:28.555422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.131 #20 NEW cov: 12042 ft: 13962 corp: 8/82b lim: 40 exec/s: 0 rss: 70Mb L: 9/24 MS: 1 ChangeBit- 00:06:44.131 [2024-05-15 12:27:28.605805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.131 [2024-05-15 12:27:28.605830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.131 [2024-05-15 12:27:28.605905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.131 [2024-05-15 12:27:28.605920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.132 [2024-05-15 12:27:28.605977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.132 [2024-05-15 12:27:28.605991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.132 #21 NEW cov: 12042 ft: 13983 corp: 9/106b lim: 40 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 CMP- DE: "\377\001\000\000"- 00:06:44.132 [2024-05-15 12:27:28.656032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.132 [2024-05-15 12:27:28.656058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.132 [2024-05-15 12:27:28.656118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:32ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.132 [2024-05-15 12:27:28.656133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.132 [2024-05-15 12:27:28.656191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.132 [2024-05-15 12:27:28.656205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.132 #22 NEW cov: 12042 ft: 14037 corp: 10/131b lim: 40 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 InsertByte- 
00:06:44.132 [2024-05-15 12:27:28.705790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:53514e99 cdw11:d7078600 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.132 [2024-05-15 12:27:28.705815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.132 #23 NEW cov: 12042 ft: 14144 corp: 11/140b lim: 40 exec/s: 0 rss: 70Mb L: 9/25 MS: 1 ChangeBit- 00:06:44.132 [2024-05-15 12:27:28.746284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.132 [2024-05-15 12:27:28.746310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.132 [2024-05-15 12:27:28.746373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fffffffe cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.132 [2024-05-15 12:27:28.746392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.132 [2024-05-15 12:27:28.746454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.132 [2024-05-15 12:27:28.746468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.389 #24 NEW cov: 12042 ft: 14178 corp: 12/164b lim: 40 exec/s: 0 rss: 70Mb L: 24/25 MS: 1 ChangeBit- 00:06:44.389 [2024-05-15 12:27:28.786027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9953b757 cdw11:4e510786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.389 [2024-05-15 12:27:28.786053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.389 #25 NEW cov: 12042 ft: 14187 corp: 13/173b lim: 40 exec/s: 0 rss: 70Mb L: 9/25 MS: 1 ShuffleBytes- 00:06:44.389 [2024-05-15 12:27:28.826538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:fff6ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.389 [2024-05-15 12:27:28.826562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.389 [2024-05-15 12:27:28.826638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.389 [2024-05-15 12:27:28.826652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.389 [2024-05-15 12:27:28.826710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.389 [2024-05-15 12:27:28.826723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.389 #26 NEW cov: 12042 ft: 14241 corp: 14/197b lim: 40 exec/s: 0 rss: 70Mb L: 24/25 MS: 1 ChangeBinInt- 00:06:44.389 [2024-05-15 12:27:28.866265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 
cdw10:b753514e cdw11:ff570786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.389 [2024-05-15 12:27:28.866290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.389 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:44.389 #27 NEW cov: 12065 ft: 14368 corp: 15/207b lim: 40 exec/s: 0 rss: 70Mb L: 10/25 MS: 1 InsertByte- 00:06:44.389 [2024-05-15 12:27:28.906542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b753514e cdw11:00860757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.389 [2024-05-15 12:27:28.906567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.389 [2024-05-15 12:27:28.906639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ee94e66a cdw11:99570786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.389 [2024-05-15 12:27:28.906654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.390 #28 NEW cov: 12065 ft: 14652 corp: 16/224b lim: 40 exec/s: 0 rss: 70Mb L: 17/25 MS: 1 CMP- DE: "\000\206\007W\356\224\346j"- 00:06:44.390 [2024-05-15 12:27:28.946508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b7ff0100 cdw11:0053514e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.390 [2024-05-15 12:27:28.946534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.390 #29 NEW cov: 12065 ft: 14656 corp: 17/238b lim: 40 exec/s: 0 rss: 70Mb L: 14/25 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:06:44.390 [2024-05-15 12:27:28.996980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.390 [2024-05-15 12:27:28.997006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.390 [2024-05-15 12:27:28.997069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.390 [2024-05-15 12:27:28.997086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.390 [2024-05-15 12:27:28.997147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:3effff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.390 [2024-05-15 12:27:28.997160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.647 #30 NEW cov: 12065 ft: 14714 corp: 18/263b lim: 40 exec/s: 30 rss: 70Mb L: 25/25 MS: 1 InsertByte- 00:06:44.647 [2024-05-15 12:27:29.036776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9953c3b7 cdw11:574e5107 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.036802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.647 #31 NEW cov: 12065 ft: 14747 corp: 19/273b lim: 40 exec/s: 31 rss: 70Mb L: 10/25 MS: 1 
InsertByte- 00:06:44.647 [2024-05-15 12:27:29.086904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff010000 cdw11:99470786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.086930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.647 #32 NEW cov: 12065 ft: 14780 corp: 20/282b lim: 40 exec/s: 32 rss: 71Mb L: 9/25 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:06:44.647 [2024-05-15 12:27:29.137374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.137404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.647 [2024-05-15 12:27:29.137462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.137476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.647 [2024-05-15 12:27:29.137551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a60 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.137564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.647 #33 NEW cov: 12065 ft: 14809 corp: 21/307b lim: 40 exec/s: 33 rss: 71Mb L: 25/25 MS: 1 InsertByte- 00:06:44.647 [2024-05-15 12:27:29.177542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.177566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.647 [2024-05-15 12:27:29.177626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:32ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.177640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.647 [2024-05-15 12:27:29.177696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:a3ffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.177709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.647 #34 NEW cov: 12065 ft: 14860 corp: 22/332b lim: 40 exec/s: 34 rss: 71Mb L: 25/25 MS: 1 ChangeByte- 00:06:44.647 [2024-05-15 12:27:29.227630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.227655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.647 [2024-05-15 12:27:29.227719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:3cffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:44.647 [2024-05-15 12:27:29.227733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.647 [2024-05-15 12:27:29.227790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.647 [2024-05-15 12:27:29.227803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.647 #35 NEW cov: 12065 ft: 14866 corp: 23/356b lim: 40 exec/s: 35 rss: 71Mb L: 24/25 MS: 1 ChangeByte- 00:06:44.905 [2024-05-15 12:27:29.267735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.267760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.267823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:32f7ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.267837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.267898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.267911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.905 #36 NEW cov: 12065 ft: 14914 corp: 24/381b lim: 40 exec/s: 36 rss: 71Mb L: 25/25 MS: 1 ChangeBit- 00:06:44.905 [2024-05-15 12:27:29.307493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9953c3b7 cdw11:574e513a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.307517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.905 #37 NEW cov: 12065 ft: 14920 corp: 25/394b lim: 40 exec/s: 37 rss: 71Mb L: 13/25 MS: 1 InsertRepeatedBytes- 00:06:44.905 [2024-05-15 12:27:29.357998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:fffffdff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.358023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.358102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:32ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.358117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.358176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:a3ffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.358189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.905 #38 NEW cov: 12065 ft: 14929 corp: 26/419b lim: 40 exec/s: 38 
rss: 71Mb L: 25/25 MS: 1 ChangeBit- 00:06:44.905 [2024-05-15 12:27:29.407779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:53514e99 cdw11:76d70786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.407804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.905 #39 NEW cov: 12065 ft: 14943 corp: 27/429b lim: 40 exec/s: 39 rss: 71Mb L: 10/25 MS: 1 InsertByte- 00:06:44.905 [2024-05-15 12:27:29.458301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.458329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.458402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:32ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.458417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.458475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff32 cdw11:ffa3ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.458489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.905 #40 NEW cov: 12065 ft: 14948 corp: 28/455b lim: 40 exec/s: 40 rss: 71Mb L: 26/26 MS: 1 InsertByte- 00:06:44.905 [2024-05-15 12:27:29.498568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:53514e99 cdw11:76d70786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.498593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.498669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e5e5e5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.498684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.498743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e5e5e5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.498756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.905 [2024-05-15 12:27:29.498812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e5000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.905 [2024-05-15 12:27:29.498826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.163 #41 NEW cov: 12065 ft: 15255 corp: 29/487b lim: 40 exec/s: 41 rss: 71Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:45.163 [2024-05-15 12:27:29.548566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:45.163 [2024-05-15 12:27:29.548591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.163 [2024-05-15 12:27:29.548667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0100fcfe cdw11:32ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.548681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.163 [2024-05-15 12:27:29.548738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:a3ffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.548751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.163 #42 NEW cov: 12065 ft: 15279 corp: 30/512b lim: 40 exec/s: 42 rss: 71Mb L: 25/32 MS: 1 ChangeBinInt- 00:06:45.163 [2024-05-15 12:27:29.588601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.588626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.163 [2024-05-15 12:27:29.588702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.588720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.163 [2024-05-15 12:27:29.588779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a60 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.588792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.163 #43 NEW cov: 12065 ft: 15305 corp: 31/537b lim: 40 exec/s: 43 rss: 71Mb L: 25/32 MS: 1 ShuffleBytes- 00:06:45.163 [2024-05-15 12:27:29.638796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.638821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.163 [2024-05-15 12:27:29.638896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.638910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.163 [2024-05-15 12:27:29.638966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.638980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.163 #44 NEW cov: 12065 ft: 15307 corp: 32/561b lim: 40 exec/s: 44 rss: 71Mb L: 24/32 MS: 1 CopyPart- 00:06:45.163 [2024-05-15 12:27:29.678569] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:97ff0100 cdw11:0053514e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.678593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.163 #45 NEW cov: 12065 ft: 15327 corp: 33/575b lim: 40 exec/s: 45 rss: 72Mb L: 14/32 MS: 1 ChangeBit- 00:06:45.163 [2024-05-15 12:27:29.728870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b753514e cdw11:00860757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.728894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.163 [2024-05-15 12:27:29.728971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ee6c156a cdw11:99570786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.728985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.163 #46 NEW cov: 12065 ft: 15330 corp: 34/592b lim: 40 exec/s: 46 rss: 72Mb L: 17/32 MS: 1 ChangeBinInt- 00:06:45.163 [2024-05-15 12:27:29.778880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff010000 cdw11:9947ff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.163 [2024-05-15 12:27:29.778906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.421 #47 NEW cov: 12065 ft: 15367 corp: 35/605b lim: 40 exec/s: 47 rss: 72Mb L: 13/32 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:06:45.421 [2024-05-15 12:27:29.829356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.829391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.421 [2024-05-15 12:27:29.829452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:010000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.829469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.421 [2024-05-15 12:27:29.829529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:53514e99 cdw11:57078600 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.829542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.421 #48 NEW cov: 12065 ft: 15380 corp: 36/630b lim: 40 exec/s: 48 rss: 72Mb L: 25/32 MS: 1 PersAutoDict- DE: "SQN\231W\007\206\000"- 00:06:45.421 [2024-05-15 12:27:29.879502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.879527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.421 [2024-05-15 12:27:29.879602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 
nsid:0 cdw10:010019fe cdw11:32ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.879616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.421 [2024-05-15 12:27:29.879675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:a3ffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.879688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.421 #49 NEW cov: 12065 ft: 15395 corp: 37/655b lim: 40 exec/s: 49 rss: 72Mb L: 25/32 MS: 1 ChangeByte- 00:06:45.421 [2024-05-15 12:27:29.929756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:53514e99 cdw11:76d70786 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.929781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.421 [2024-05-15 12:27:29.929859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e5e5e520 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.929873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.421 [2024-05-15 12:27:29.929933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000e5 cdw11:e5e5e5e5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.929947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.421 [2024-05-15 12:27:29.930002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:e5e5000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.930016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.421 #55 NEW cov: 12065 ft: 15464 corp: 38/687b lim: 40 exec/s: 55 rss: 72Mb L: 32/32 MS: 1 ChangeBinInt- 00:06:45.421 [2024-05-15 12:27:29.979433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b753514e cdw11:0aff5707 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.421 [2024-05-15 12:27:29.979458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.421 #56 NEW cov: 12065 ft: 15474 corp: 39/697b lim: 40 exec/s: 28 rss: 72Mb L: 10/32 MS: 1 CrossOver- 00:06:45.421 #56 DONE cov: 12065 ft: 15474 corp: 39/697b lim: 40 exec/s: 28 rss: 72Mb 00:06:45.421 ###### Recommended dictionary. ###### 00:06:45.421 "SQN\231W\007\206\000" # Uses: 2 00:06:45.421 "\377\001\000\000" # Uses: 3 00:06:45.421 "\000\206\007W\356\224\346j" # Uses: 0 00:06:45.421 ###### End of recommended dictionary. 
###### 00:06:45.421 Done 56 runs in 2 second(s) 00:06:45.421 [2024-05-15 12:27:29.999863] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:45.679 12:27:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:45.679 [2024-05-15 12:27:30.157391] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:45.679 [2024-05-15 12:27:30.157459] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405410 ] 00:06:45.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.936 [2024-05-15 12:27:30.330507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.936 [2024-05-15 12:27:30.397122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.936 [2024-05-15 12:27:30.457286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.936 [2024-05-15 12:27:30.473262] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:45.936 [2024-05-15 12:27:30.473691] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:45.936 INFO: Running with entropic power schedule (0xFF, 100). 00:06:45.936 INFO: Seed: 3527556300 00:06:45.936 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:45.936 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:45.936 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:45.936 INFO: A corpus is not provided, starting from an empty corpus 00:06:45.936 #2 INITED exec/s: 0 rss: 64Mb 00:06:45.936 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:45.936 This may also happen if the target rejected all inputs we tried so far 00:06:45.936 [2024-05-15 12:27:30.539315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.936 [2024-05-15 12:27:30.539346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.936 [2024-05-15 12:27:30.539407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.936 [2024-05-15 12:27:30.539421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.936 [2024-05-15 12:27:30.539478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.936 [2024-05-15 12:27:30.539491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.936 [2024-05-15 12:27:30.539544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.936 [2024-05-15 12:27:30.539557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.452 NEW_FUNC[1/685]: 0x492670 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:46.452 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 
00:06:46.452 #14 NEW cov: 11817 ft: 11818 corp: 2/39b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:46.452 [2024-05-15 12:27:30.870460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.452 [2024-05-15 12:27:30.870517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.452 [2024-05-15 12:27:30.870603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.452 [2024-05-15 12:27:30.870630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.452 [2024-05-15 12:27:30.870712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.452 [2024-05-15 12:27:30.870738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.452 [2024-05-15 12:27:30.870818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.452 [2024-05-15 12:27:30.870843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.452 NEW_FUNC[1/1]: 0x17b2b40 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1494 00:06:46.452 #25 NEW cov: 11949 ft: 12405 corp: 3/77b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 ChangeBinInt- 00:06:46.452 [2024-05-15 12:27:30.930352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.452 [2024-05-15 12:27:30.930385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.452 [2024-05-15 12:27:30.930462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.452 [2024-05-15 12:27:30.930477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.452 [2024-05-15 12:27:30.930535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.452 [2024-05-15 12:27:30.930552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.452 [2024-05-15 12:27:30.930609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.452 [2024-05-15 12:27:30.930623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.453 #26 NEW cov: 11955 ft: 12777 corp: 4/115b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 CrossOver- 00:06:46.453 [2024-05-15 12:27:30.980457] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:30.980481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:30.980555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:30.980569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:30.980625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:30.980639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:30.980699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:30.980713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.453 #27 NEW cov: 12040 ft: 13052 corp: 5/153b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 CrossOver- 00:06:46.453 [2024-05-15 12:27:31.020670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:31.020696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:31.020769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:31.020784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:31.020842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:31.020855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:31.020909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:31.020923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.453 #28 NEW cov: 12040 ft: 13141 corp: 6/191b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 CrossOver- 00:06:46.453 [2024-05-15 12:27:31.060699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:31.060724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:31.060782] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:31.060799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:31.060856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57574757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:31.060870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.453 [2024-05-15 12:27:31.060925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.453 [2024-05-15 12:27:31.060938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.711 #29 NEW cov: 12040 ft: 13183 corp: 7/229b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 ChangeBit- 00:06:46.711 [2024-05-15 12:27:31.110675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.711 [2024-05-15 12:27:31.110701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.711 [2024-05-15 12:27:31.110779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.711 [2024-05-15 12:27:31.110793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.711 [2024-05-15 12:27:31.110851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.711 [2024-05-15 12:27:31.110864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.711 #30 NEW cov: 12040 ft: 13611 corp: 8/255b lim: 40 exec/s: 0 rss: 71Mb L: 26/38 MS: 1 EraseBytes- 00:06:46.711 [2024-05-15 12:27:31.161004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.161029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 12:27:31.161104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.161118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 12:27:31.161175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.161189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 
12:27:31.161248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575557 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.161262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.712 #31 NEW cov: 12040 ft: 13706 corp: 9/293b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 ChangeBinInt- 00:06:46.712 [2024-05-15 12:27:31.201046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.201071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 12:27:31.201146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.201163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 12:27:31.201223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.201236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 12:27:31.201293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57571757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.201307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.712 #32 NEW cov: 12040 ft: 13794 corp: 10/331b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 ChangeBit- 00:06:46.712 [2024-05-15 12:27:31.251042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.251066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 12:27:31.251144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.251157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 12:27:31.251216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.251229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.712 #33 NEW cov: 12040 ft: 13903 corp: 11/357b lim: 40 exec/s: 0 rss: 71Mb L: 26/38 MS: 1 ShuffleBytes- 00:06:46.712 [2024-05-15 12:27:31.301022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.301047] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.712 [2024-05-15 12:27:31.301123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:5757570a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.712 [2024-05-15 12:27:31.301137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.712 #39 NEW cov: 12040 ft: 14174 corp: 12/373b lim: 40 exec/s: 0 rss: 71Mb L: 16/38 MS: 1 CrossOver- 00:06:46.970 [2024-05-15 12:27:31.341439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.970 [2024-05-15 12:27:31.341465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.970 [2024-05-15 12:27:31.341524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.970 [2024-05-15 12:27:31.341538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.970 [2024-05-15 12:27:31.341594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.341608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.341664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57570057 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.341682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.971 #40 NEW cov: 12040 ft: 14183 corp: 13/412b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 InsertByte- 00:06:46.971 [2024-05-15 12:27:31.381395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57ff5757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.381420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.381505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.381519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.381575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.381588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.971 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:46.971 #41 NEW cov: 12063 ft: 14203 corp: 14/438b lim: 40 exec/s: 0 rss: 71Mb L: 26/39 MS: 1 ChangeByte- 
00:06:46.971 [2024-05-15 12:27:31.431408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57579557 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.431433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.431493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:5757570a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.431507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.971 #42 NEW cov: 12063 ft: 14242 corp: 15/454b lim: 40 exec/s: 0 rss: 72Mb L: 16/39 MS: 1 ChangeByte- 00:06:46.971 [2024-05-15 12:27:31.481575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.481601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.481660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:5757570a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.481675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.971 #43 NEW cov: 12063 ft: 14266 corp: 16/470b lim: 40 exec/s: 0 rss: 72Mb L: 16/39 MS: 1 CopyPart- 00:06:46.971 [2024-05-15 12:27:31.522032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.522056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.522133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.522148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.522206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:58575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.522222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.522279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575557 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.522292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.971 #44 NEW cov: 12063 ft: 14292 corp: 17/508b lim: 40 exec/s: 44 rss: 72Mb L: 38/39 MS: 1 ChangeBinInt- 00:06:46.971 [2024-05-15 12:27:31.571845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57578989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 
12:27:31.571870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.971 [2024-05-15 12:27:31.571945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.971 [2024-05-15 12:27:31.571960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.229 #46 NEW cov: 12063 ft: 14325 corp: 18/527b lim: 40 exec/s: 46 rss: 72Mb L: 19/39 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:47.229 [2024-05-15 12:27:31.611935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.229 [2024-05-15 12:27:31.611960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.612038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.612052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.230 #47 NEW cov: 12063 ft: 14336 corp: 19/543b lim: 40 exec/s: 47 rss: 72Mb L: 16/39 MS: 1 CrossOver- 00:06:47.230 [2024-05-15 12:27:31.662067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57578989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.662091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.662167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:89899989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.662181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.230 #48 NEW cov: 12063 ft: 14417 corp: 20/562b lim: 40 exec/s: 48 rss: 72Mb L: 19/39 MS: 1 ChangeBit- 00:06:47.230 [2024-05-15 12:27:31.712515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.712541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.712599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.712613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.712687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.712701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.712759] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575557 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.712775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.230 #49 NEW cov: 12063 ft: 14426 corp: 21/600b lim: 40 exec/s: 49 rss: 72Mb L: 38/39 MS: 1 CrossOver- 00:06:47.230 [2024-05-15 12:27:31.752783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.752809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.752868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.752881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.752939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.752952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.753009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57571757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.753022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.753080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.753093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.230 #50 NEW cov: 12063 ft: 14479 corp: 22/640b lim: 40 exec/s: 50 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:47.230 [2024-05-15 12:27:31.802777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.802801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.802862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.802876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.802931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.802944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.803000] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.803014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.230 #51 NEW cov: 12063 ft: 14487 corp: 23/678b lim: 40 exec/s: 51 rss: 72Mb L: 38/40 MS: 1 ShuffleBytes- 00:06:47.230 [2024-05-15 12:27:31.842630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57578989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.842657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.230 [2024-05-15 12:27:31.842723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.230 [2024-05-15 12:27:31.842737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.489 #52 NEW cov: 12063 ft: 14517 corp: 24/697b lim: 40 exec/s: 52 rss: 72Mb L: 19/40 MS: 1 ShuffleBytes- 00:06:47.489 [2024-05-15 12:27:31.882864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.882889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:31.882950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.882963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:31.883022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.883035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.489 #53 NEW cov: 12063 ft: 14551 corp: 25/724b lim: 40 exec/s: 53 rss: 72Mb L: 27/40 MS: 1 CrossOver- 00:06:47.489 [2024-05-15 12:27:31.923317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.923342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:31.923415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.923430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:31.923496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.923509] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:31.923563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.923577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:31.923633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.923647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.489 #54 NEW cov: 12063 ft: 14565 corp: 26/764b lim: 40 exec/s: 54 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:47.489 [2024-05-15 12:27:31.962790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:31.962814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.489 #60 NEW cov: 12063 ft: 15298 corp: 27/773b lim: 40 exec/s: 60 rss: 72Mb L: 9/40 MS: 1 CrossOver- 00:06:47.489 [2024-05-15 12:27:32.003389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.003413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.003475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5757575f cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.003489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.003545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.003559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.003616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575557 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.003629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.489 #61 NEW cov: 12063 ft: 15308 corp: 28/811b lim: 40 exec/s: 61 rss: 72Mb L: 38/40 MS: 1 ChangeBit- 00:06:47.489 [2024-05-15 12:27:32.043481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575760 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.043507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.043569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 
nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.043583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.043642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.043656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.043712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575755 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.043726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.489 #62 NEW cov: 12063 ft: 15354 corp: 29/850b lim: 40 exec/s: 62 rss: 73Mb L: 39/40 MS: 1 InsertByte- 00:06:47.489 [2024-05-15 12:27:32.093627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.093653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.093726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.093740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.093795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.093808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.489 [2024-05-15 12:27:32.093866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.489 [2024-05-15 12:27:32.093880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.747 #63 NEW cov: 12063 ft: 15395 corp: 30/888b lim: 40 exec/s: 63 rss: 73Mb L: 38/40 MS: 1 ShuffleBytes- 00:06:47.748 [2024-05-15 12:27:32.133735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.133761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.133820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.133834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.133894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.133907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.133966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575557 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.133979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.748 #64 NEW cov: 12063 ft: 15402 corp: 31/926b lim: 40 exec/s: 64 rss: 73Mb L: 38/40 MS: 1 CopyPart- 00:06:47.748 [2024-05-15 12:27:32.173832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.173858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.173919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:5f575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.173932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.173989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.174003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.174061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575755 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.174074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.748 #65 NEW cov: 12063 ft: 15418 corp: 32/965b lim: 40 exec/s: 65 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:06:47.748 [2024-05-15 12:27:32.223703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.223728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.223804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575557 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.223818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.748 #66 NEW cov: 12063 ft: 15431 corp: 33/987b lim: 40 exec/s: 66 rss: 73Mb L: 22/40 MS: 1 EraseBytes- 00:06:47.748 [2024-05-15 12:27:32.274141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.274165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.274241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57555757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.274255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.274310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:58575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.274323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.274384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575557 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.274397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.748 #67 NEW cov: 12063 ft: 15445 corp: 34/1025b lim: 40 exec/s: 67 rss: 73Mb L: 38/40 MS: 1 ChangeBit- 00:06:47.748 [2024-05-15 12:27:32.314065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5757571d cdw11:1d1d1d1d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.314090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.314148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1d1d1d1d cdw11:1d1d1d1d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.314161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.314215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1d1d1d1d cdw11:1d575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.314228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.748 #68 NEW cov: 12063 ft: 15449 corp: 35/1052b lim: 40 exec/s: 68 rss: 73Mb L: 27/40 MS: 1 InsertRepeatedBytes- 00:06:47.748 [2024-05-15 12:27:32.364454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.364480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.364539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.364553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.364612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.364625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.748 [2024-05-15 12:27:32.364682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575755 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.748 [2024-05-15 12:27:32.364696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.007 #69 NEW cov: 12063 ft: 15453 corp: 36/1091b lim: 40 exec/s: 69 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:06:48.007 [2024-05-15 12:27:32.404493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.404522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.007 [2024-05-15 12:27:32.404579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.404593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.007 [2024-05-15 12:27:32.404651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.404664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.007 [2024-05-15 12:27:32.404719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:57575755 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.404732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.007 #70 NEW cov: 12063 ft: 15478 corp: 37/1130b lim: 40 exec/s: 70 rss: 73Mb L: 39/40 MS: 1 CopyPart- 00:06:48.007 [2024-05-15 12:27:32.454630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.454655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.007 [2024-05-15 12:27:32.454711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:57575757 cdw11:57575757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.454725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.007 [2024-05-15 12:27:32.454782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:5757eb2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.454795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.007 [2024-05-15 12:27:32.454851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:78fd5907 cdw11:86005757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.454865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.007 #71 NEW cov: 12063 ft: 15491 corp: 38/1165b lim: 40 exec/s: 71 rss: 73Mb L: 35/40 MS: 1 CMP- DE: "\353,x\375Y\007\206\000"- 00:06:48.007 [2024-05-15 12:27:32.504320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:57575757 cdw11:57d75757 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.007 [2024-05-15 12:27:32.504345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.007 #72 NEW cov: 12063 ft: 15504 corp: 39/1174b lim: 40 exec/s: 36 rss: 73Mb L: 9/40 MS: 1 ChangeBit- 00:06:48.007 #72 DONE cov: 12063 ft: 15504 corp: 39/1174b lim: 40 exec/s: 36 rss: 73Mb 00:06:48.007 ###### Recommended dictionary. ###### 00:06:48.007 "\353,x\375Y\007\206\000" # Uses: 0 00:06:48.007 ###### End of recommended dictionary. ###### 00:06:48.007 Done 72 runs in 2 second(s) 00:06:48.007 [2024-05-15 12:27:32.528176] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:48.266 12:27:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c 
/tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:48.266 [2024-05-15 12:27:32.697185] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:48.266 [2024-05-15 12:27:32.697256] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405939 ] 00:06:48.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.266 [2024-05-15 12:27:32.871500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.525 [2024-05-15 12:27:32.942125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.525 [2024-05-15 12:27:33.001704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.525 [2024-05-15 12:27:33.017643] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:48.525 [2024-05-15 12:27:33.018073] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:48.525 INFO: Running with entropic power schedule (0xFF, 100). 00:06:48.525 INFO: Seed: 1776576560 00:06:48.525 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:48.525 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:48.525 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:48.525 INFO: A corpus is not provided, starting from an empty corpus 00:06:48.525 #2 INITED exec/s: 0 rss: 63Mb 00:06:48.525 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:48.525 This may also happen if the target rejected all inputs we tried so far 00:06:48.525 [2024-05-15 12:27:33.073534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.525 [2024-05-15 12:27:33.073562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.525 [2024-05-15 12:27:33.073636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.525 [2024-05-15 12:27:33.073651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.525 [2024-05-15 12:27:33.073711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.525 [2024-05-15 12:27:33.073725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.783 NEW_FUNC[1/685]: 0x494230 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:48.783 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:48.783 #10 NEW cov: 11807 ft: 11801 corp: 2/31b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:06:49.041 [2024-05-15 12:27:33.404304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.404338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.404421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.404436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.404493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00001e31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.404507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.041 #11 NEW cov: 11937 ft: 12403 corp: 3/61b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:49.041 [2024-05-15 12:27:33.454366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.454395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.454450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 
12:27:33.454464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.454515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.454528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.041 #12 NEW cov: 11943 ft: 12621 corp: 4/91b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CrossOver- 00:06:49.041 [2024-05-15 12:27:33.494343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.494368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.494454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.494469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.041 #13 NEW cov: 12028 ft: 13205 corp: 5/107b lim: 40 exec/s: 0 rss: 70Mb L: 16/30 MS: 1 EraseBytes- 00:06:49.041 [2024-05-15 12:27:33.544703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.544734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.544790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.544804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.544857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00001e31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.544870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.041 #14 NEW cov: 12028 ft: 13243 corp: 6/137b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:49.041 [2024-05-15 12:27:33.594764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.594789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.594862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.594876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.594931] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.594944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.041 #15 NEW cov: 12028 ft: 13287 corp: 7/167b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:49.041 [2024-05-15 12:27:33.634900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.634926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.634984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.634998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.041 [2024-05-15 12:27:33.635052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.041 [2024-05-15 12:27:33.635066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.041 #16 NEW cov: 12028 ft: 13340 corp: 8/197b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:49.315 [2024-05-15 12:27:33.674938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.674964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.675024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.675038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.315 #17 NEW cov: 12028 ft: 13394 corp: 9/214b lim: 40 exec/s: 0 rss: 70Mb L: 17/30 MS: 1 InsertByte- 00:06:49.315 [2024-05-15 12:27:33.725172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.725197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.725272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3131d5ce cdw11:cececece SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.725286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.725341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cece3131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.725355] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.315 #18 NEW cov: 12028 ft: 13423 corp: 10/244b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:49.315 [2024-05-15 12:27:33.765521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.765547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.765604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.765617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.765671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00001e31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.765685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.765739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:e4e4e4e4 cdw11:e4e4e4e4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.765751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.765806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:e4e43131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.765819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.315 #19 NEW cov: 12028 ft: 13921 corp: 11/284b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:49.315 [2024-05-15 12:27:33.815176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:40ff033d cdw11:310a3131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.815201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.315 #23 NEW cov: 12028 ft: 14248 corp: 12/295b lim: 40 exec/s: 0 rss: 70Mb L: 11/40 MS: 4 ChangeByte-CMP-EraseBytes-CrossOver- DE: "\377\377\377="- 00:06:49.315 [2024-05-15 12:27:33.855554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.855579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.855636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3131d5ce cdw11:cececece SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.855654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 
12:27:33.855709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cece3131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.855722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.315 #24 NEW cov: 12028 ft: 14273 corp: 13/325b lim: 40 exec/s: 0 rss: 70Mb L: 30/40 MS: 1 ShuffleBytes- 00:06:49.315 [2024-05-15 12:27:33.905728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.905754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.905813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f313131 cdw11:03313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.905827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.905882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.905895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.315 [2024-05-15 12:27:33.905949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:31313131 cdw11:31310000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.315 [2024-05-15 12:27:33.905962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.587 #25 NEW cov: 12028 ft: 14305 corp: 14/364b lim: 40 exec/s: 0 rss: 70Mb L: 39/40 MS: 1 CrossOver- 00:06:49.587 [2024-05-15 12:27:33.955585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:33.955611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.587 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:49.587 #26 NEW cov: 12051 ft: 14363 corp: 15/379b lim: 40 exec/s: 0 rss: 70Mb L: 15/40 MS: 1 EraseBytes- 00:06:49.587 [2024-05-15 12:27:33.996025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:318a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:33.996051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:33.996108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:33.996122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:33.996175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:33.996189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:33.996241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:33.996254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.587 #27 NEW cov: 12051 ft: 14367 corp: 16/416b lim: 40 exec/s: 0 rss: 70Mb L: 37/40 MS: 1 InsertRepeatedBytes- 00:06:49.587 [2024-05-15 12:27:34.046169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.046194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:34.046267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f3131ff cdw11:ffff3d31 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.046281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:34.046333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.046346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:34.046399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.046412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.587 #28 NEW cov: 12051 ft: 14403 corp: 17/450b lim: 40 exec/s: 28 rss: 71Mb L: 34/40 MS: 1 PersAutoDict- DE: "\377\377\377="- 00:06:49.587 [2024-05-15 12:27:34.086065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a29 cdw11:31313141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.086091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:34.086163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.086177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.587 #29 NEW cov: 12051 ft: 14434 corp: 18/467b lim: 40 exec/s: 29 rss: 71Mb L: 17/40 MS: 1 ChangeByte- 00:06:49.587 [2024-05-15 12:27:34.136216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.136241] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:34.136297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f313131 cdw11:03310000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.136311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.587 #30 NEW cov: 12051 ft: 14474 corp: 19/489b lim: 40 exec/s: 30 rss: 71Mb L: 22/40 MS: 1 EraseBytes- 00:06:49.587 [2024-05-15 12:27:34.186476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.186500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:34.186553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.186566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.587 [2024-05-15 12:27:34.186620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313171 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.587 [2024-05-15 12:27:34.186638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.846 #31 NEW cov: 12051 ft: 14484 corp: 20/519b lim: 40 exec/s: 31 rss: 71Mb L: 30/40 MS: 1 ChangeBit- 00:06:49.846 [2024-05-15 12:27:34.226462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03312931 cdw11:310a3141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.226487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.846 [2024-05-15 12:27:34.226559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.226572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.846 #32 NEW cov: 12051 ft: 14508 corp: 21/536b lim: 40 exec/s: 32 rss: 71Mb L: 17/40 MS: 1 ShuffleBytes- 00:06:49.846 [2024-05-15 12:27:34.276490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:40ff033d cdw11:310a3131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.276515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.846 #38 NEW cov: 12051 ft: 14514 corp: 22/547b lim: 40 exec/s: 38 rss: 71Mb L: 11/40 MS: 1 ChangeBit- 00:06:49.846 [2024-05-15 12:27:34.326820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.326844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:49.846 [2024-05-15 12:27:34.326917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.326932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.846 [2024-05-15 12:27:34.326988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31623131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.327001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.846 #39 NEW cov: 12051 ft: 14558 corp: 23/577b lim: 40 exec/s: 39 rss: 71Mb L: 30/40 MS: 1 ChangeByte- 00:06:49.846 [2024-05-15 12:27:34.367087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.367112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.846 [2024-05-15 12:27:34.367186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.367200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.846 [2024-05-15 12:27:34.367256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.367269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.846 [2024-05-15 12:27:34.367323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.367336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.846 #40 NEW cov: 12051 ft: 14564 corp: 24/612b lim: 40 exec/s: 40 rss: 71Mb L: 35/40 MS: 1 CopyPart- 00:06:49.846 [2024-05-15 12:27:34.407054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.407079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.846 [2024-05-15 12:27:34.407153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.407168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.846 [2024-05-15 12:27:34.407222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3131ffff cdw11:ff3d3131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.846 [2024-05-15 12:27:34.407235] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.846 #41 NEW cov: 12051 ft: 14599 corp: 25/642b lim: 40 exec/s: 41 rss: 71Mb L: 30/40 MS: 1 PersAutoDict- DE: "\377\377\377="- 00:06:49.846 [2024-05-15 12:27:34.447180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.847 [2024-05-15 12:27:34.447205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.847 [2024-05-15 12:27:34.447261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.847 [2024-05-15 12:27:34.447275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.847 [2024-05-15 12:27:34.447328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3162ffff cdw11:ff3d3131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.847 [2024-05-15 12:27:34.447341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.106 #42 NEW cov: 12051 ft: 14633 corp: 26/672b lim: 40 exec/s: 42 rss: 71Mb L: 30/40 MS: 1 PersAutoDict- DE: "\377\377\377="- 00:06:50.106 [2024-05-15 12:27:34.497495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:318a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.497520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.497593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.497607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.497663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:313131d1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.497676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.497732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.497745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.106 #43 NEW cov: 12051 ft: 14670 corp: 27/710b lim: 40 exec/s: 43 rss: 71Mb L: 38/40 MS: 1 InsertByte- 00:06:50.106 [2024-05-15 12:27:34.547589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.547617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.547690] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.547704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.547759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3131ffff cdw11:ffffff3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.547773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.547827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff3d3131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.547840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.106 #44 NEW cov: 12051 ft: 14725 corp: 28/744b lim: 40 exec/s: 44 rss: 71Mb L: 34/40 MS: 1 PersAutoDict- DE: "\377\377\377="- 00:06:50.106 [2024-05-15 12:27:34.597508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.597533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.597589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:7e313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.597604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.106 #45 NEW cov: 12051 ft: 14746 corp: 29/761b lim: 40 exec/s: 45 rss: 72Mb L: 17/40 MS: 1 InsertByte- 00:06:50.106 [2024-05-15 12:27:34.637802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03033131 cdw11:31318a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.637826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.637899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.637913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.637971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.637984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.106 [2024-05-15 12:27:34.638038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:d1313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.638051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.106 #46 NEW cov: 12051 ft: 14751 corp: 30/800b lim: 40 exec/s: 46 rss: 72Mb L: 39/40 MS: 1 CopyPart- 00:06:50.106 [2024-05-15 12:27:34.687665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.106 [2024-05-15 12:27:34.687690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.106 #47 NEW cov: 12051 ft: 14760 corp: 31/815b lim: 40 exec/s: 47 rss: 72Mb L: 15/40 MS: 1 ChangeByte- 00:06:50.365 [2024-05-15 12:27:34.737994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.365 [2024-05-15 12:27:34.738019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.365 [2024-05-15 12:27:34.738077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3131d5ce cdw11:cececece SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.365 [2024-05-15 12:27:34.738091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.365 [2024-05-15 12:27:34.738145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cece3131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.365 [2024-05-15 12:27:34.738158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.365 #48 NEW cov: 12051 ft: 14779 corp: 32/845b lim: 40 exec/s: 48 rss: 72Mb L: 30/40 MS: 1 ChangeASCIIInt- 00:06:50.365 [2024-05-15 12:27:34.778123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.365 [2024-05-15 12:27:34.778147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.365 [2024-05-15 12:27:34.778205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3131d5ce cdw11:cececece SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.365 [2024-05-15 12:27:34.778218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.365 [2024-05-15 12:27:34.778287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cece3131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.365 [2024-05-15 12:27:34.778301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.365 #49 NEW cov: 12051 ft: 14816 corp: 33/875b lim: 40 exec/s: 49 rss: 72Mb L: 30/40 MS: 1 ChangeByte- 00:06:50.365 [2024-05-15 12:27:34.828034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:40ff033d cdw11:310a3131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.365 [2024-05-15 12:27:34.828058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.365 #50 NEW cov: 12051 ft: 14827 
corp: 34/886b lim: 40 exec/s: 50 rss: 72Mb L: 11/40 MS: 1 ChangeByte- 00:06:50.365 [2024-05-15 12:27:34.878384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:3131310a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.365 [2024-05-15 12:27:34.878424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.366 [2024-05-15 12:27:34.878485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3131d5ce cdw11:cececece SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.366 [2024-05-15 12:27:34.878498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.366 [2024-05-15 12:27:34.878552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cece3131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.366 [2024-05-15 12:27:34.878565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.366 #51 NEW cov: 12051 ft: 14864 corp: 35/916b lim: 40 exec/s: 51 rss: 72Mb L: 30/40 MS: 1 ShuffleBytes- 00:06:50.366 [2024-05-15 12:27:34.918656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.366 [2024-05-15 12:27:34.918684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.366 [2024-05-15 12:27:34.918743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.366 [2024-05-15 12:27:34.918757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.366 [2024-05-15 12:27:34.918810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00001e31 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.366 [2024-05-15 12:27:34.918824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.366 [2024-05-15 12:27:34.918877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff3d cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.366 [2024-05-15 12:27:34.918890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.366 #52 NEW cov: 12051 ft: 14866 corp: 36/950b lim: 40 exec/s: 52 rss: 72Mb L: 34/40 MS: 1 PersAutoDict- DE: "\377\377\377="- 00:06:50.366 [2024-05-15 12:27:34.958401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03310a31 cdw11:31310331 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.366 [2024-05-15 12:27:34.958425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.366 #53 NEW cov: 12051 ft: 14879 corp: 37/961b lim: 40 exec/s: 53 rss: 72Mb L: 11/40 MS: 1 CrossOver- 00:06:50.625 [2024-05-15 12:27:34.998786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:34.998810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.625 [2024-05-15 12:27:34.998868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:34.998882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.625 [2024-05-15 12:27:34.998938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:312f3131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:34.998951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.625 [2024-05-15 12:27:35.028854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:35.028878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.625 [2024-05-15 12:27:35.028952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:2e313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:35.028966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.625 [2024-05-15 12:27:35.029023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31312f31 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:35.029036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.625 #55 NEW cov: 12051 ft: 14889 corp: 38/992b lim: 40 exec/s: 55 rss: 72Mb L: 31/40 MS: 2 ChangeByte-InsertByte- 00:06:50.625 [2024-05-15 12:27:35.069050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03313131 cdw11:308a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:35.069075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.625 [2024-05-15 12:27:35.069148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:35.069162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.625 [2024-05-15 12:27:35.069222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:35.069235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.625 [2024-05-15 12:27:35.069291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 
nsid:0 cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.625 [2024-05-15 12:27:35.069304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.625 #56 NEW cov: 12051 ft: 14894 corp: 39/1029b lim: 40 exec/s: 28 rss: 72Mb L: 37/40 MS: 1 ChangeASCIIInt- 00:06:50.625 #56 DONE cov: 12051 ft: 14894 corp: 39/1029b lim: 40 exec/s: 28 rss: 72Mb 00:06:50.625 ###### Recommended dictionary. ###### 00:06:50.625 "\377\377\377=" # Uses: 6 00:06:50.625 ###### End of recommended dictionary. ###### 00:06:50.625 Done 56 runs in 2 second(s) 00:06:50.625 [2024-05-15 12:27:35.090735] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:50.625 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:50.626 12:27:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:50.884 [2024-05-15 12:27:35.258150] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:06:50.884 [2024-05-15 12:27:35.258221] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406263 ] 00:06:50.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.884 [2024-05-15 12:27:35.444268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.141 [2024-05-15 12:27:35.513694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.141 [2024-05-15 12:27:35.573263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.141 [2024-05-15 12:27:35.589210] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:51.141 [2024-05-15 12:27:35.589648] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:51.141 INFO: Running with entropic power schedule (0xFF, 100). 00:06:51.141 INFO: Seed: 52603155 00:06:51.141 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:51.141 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:51.141 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:51.141 INFO: A corpus is not provided, starting from an empty corpus 00:06:51.141 #2 INITED exec/s: 0 rss: 63Mb 00:06:51.141 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:51.141 This may also happen if the target rejected all inputs we tried so far 00:06:51.141 [2024-05-15 12:27:35.634964] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.141 [2024-05-15 12:27:35.634994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.141 [2024-05-15 12:27:35.635053] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.141 [2024-05-15 12:27:35.635070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.403 NEW_FUNC[1/686]: 0x495df0 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:51.403 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:51.403 #3 NEW cov: 11801 ft: 11802 corp: 2/16b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:06:51.403 [2024-05-15 12:27:35.945869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.403 [2024-05-15 12:27:35.945904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.403 [2024-05-15 12:27:35.945981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.403 [2024-05-15 12:27:35.945994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.403 #4 NEW cov: 11938 ft: 12455 corp: 3/31b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:06:51.404 [2024-05-15 12:27:35.995869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.404 [2024-05-15 12:27:35.995897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.404 [2024-05-15 12:27:35.995975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.404 [2024-05-15 12:27:35.995993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.661 #5 NEW cov: 11944 ft: 12693 corp: 4/46b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\001"- 00:06:51.661 [2024-05-15 12:27:36.046000] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.046025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.661 [2024-05-15 12:27:36.046086] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.046100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.661 #6 NEW cov: 12029 ft: 13148 corp: 5/61b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 ChangeBinInt- 00:06:51.661 [2024-05-15 12:27:36.086140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.086167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.661 [2024-05-15 12:27:36.086245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.086262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.661 #7 NEW cov: 12029 ft: 13204 corp: 6/76b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 ShuffleBytes- 00:06:51.661 [2024-05-15 12:27:36.126072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.126099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.661 #10 NEW cov: 12029 ft: 13924 corp: 7/84b lim: 35 exec/s: 0 rss: 70Mb L: 8/15 MS: 3 InsertRepeatedBytes-ChangeBinInt-CrossOver- 00:06:51.661 [2024-05-15 12:27:36.166187] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.166213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.661 #11 NEW cov: 12029 ft: 13962 corp: 8/92b lim: 35 
exec/s: 0 rss: 70Mb L: 8/15 MS: 1 ChangeBit- 00:06:51.661 [2024-05-15 12:27:36.216345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.216372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.661 #12 NEW cov: 12029 ft: 13990 corp: 9/100b lim: 35 exec/s: 0 rss: 70Mb L: 8/15 MS: 1 ShuffleBytes- 00:06:51.661 [2024-05-15 12:27:36.266782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.266809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.661 [2024-05-15 12:27:36.266871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.266885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.661 [2024-05-15 12:27:36.266947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.661 [2024-05-15 12:27:36.266960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.919 #13 NEW cov: 12029 ft: 14214 corp: 10/122b lim: 35 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 CrossOver- 00:06:51.919 [2024-05-15 12:27:36.316758] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.919 [2024-05-15 12:27:36.316785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.919 [2024-05-15 12:27:36.316864] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.919 [2024-05-15 12:27:36.316880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.919 #14 NEW cov: 12029 ft: 14244 corp: 11/139b lim: 35 exec/s: 0 rss: 70Mb L: 17/22 MS: 1 CrossOver- 00:06:51.919 [2024-05-15 12:27:36.356804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.919 [2024-05-15 12:27:36.356830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.919 #15 NEW cov: 12029 ft: 14340 corp: 12/147b lim: 35 exec/s: 0 rss: 70Mb L: 8/22 MS: 1 CMP- DE: "\037\000\000\000"- 00:06:51.919 [2024-05-15 12:27:36.396830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.919 [2024-05-15 12:27:36.396857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.919 #20 NEW cov: 12029 ft: 14391 corp: 13/156b lim: 35 exec/s: 0 rss: 70Mb L: 9/22 MS: 5 InsertByte-ChangeBinInt-ChangeBit-EraseBytes-CrossOver- 00:06:51.919 [2024-05-15 12:27:36.436981] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.919 [2024-05-15 12:27:36.437008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.919 #21 NEW cov: 12029 ft: 14420 corp: 14/165b lim: 35 exec/s: 0 rss: 70Mb L: 9/22 MS: 1 ChangeBit- 00:06:51.919 [2024-05-15 12:27:36.487277] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.919 [2024-05-15 12:27:36.487302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.919 [2024-05-15 12:27:36.487385] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.919 [2024-05-15 12:27:36.487402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.919 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:51.920 #22 NEW cov: 12052 ft: 14440 corp: 15/180b lim: 35 exec/s: 0 rss: 71Mb L: 15/22 MS: 1 ShuffleBytes- 00:06:52.177 [2024-05-15 12:27:36.537243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.177 [2024-05-15 12:27:36.537269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.177 #23 NEW cov: 12052 ft: 14464 corp: 16/188b lim: 35 exec/s: 0 rss: 71Mb L: 8/22 MS: 1 ChangeBinInt- 00:06:52.177 [2024-05-15 12:27:36.587358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.177 [2024-05-15 12:27:36.587387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.177 #24 NEW cov: 12052 ft: 14469 corp: 17/197b lim: 35 exec/s: 0 rss: 71Mb L: 9/22 MS: 1 ChangeBinInt- 00:06:52.177 [2024-05-15 12:27:36.627702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.177 [2024-05-15 12:27:36.627729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.177 [2024-05-15 12:27:36.627811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.177 [2024-05-15 12:27:36.627825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.177 #25 NEW cov: 12052 ft: 14484 corp: 18/212b lim: 35 exec/s: 25 rss: 71Mb L: 15/22 MS: 1 ChangeByte- 00:06:52.177 [2024-05-15 12:27:36.667628] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.177 [2024-05-15 12:27:36.667656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.177 #26 NEW cov: 12052 ft: 14507 corp: 19/225b lim: 35 exec/s: 26 rss: 71Mb L: 13/22 MS: 1 
EraseBytes- 00:06:52.177 [2024-05-15 12:27:36.707751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.177 [2024-05-15 12:27:36.707777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.177 #27 NEW cov: 12052 ft: 14586 corp: 20/235b lim: 35 exec/s: 27 rss: 71Mb L: 10/22 MS: 1 InsertByte- 00:06:52.177 [2024-05-15 12:27:36.757910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES TIMESTAMP cid:4 cdw10:8000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.177 [2024-05-15 12:27:36.757935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.177 #30 NEW cov: 12052 ft: 14600 corp: 21/248b lim: 35 exec/s: 30 rss: 71Mb L: 13/22 MS: 3 ChangeBit-ShuffleBytes-CrossOver- 00:06:52.435 [2024-05-15 12:27:36.798221] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.798248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.435 [2024-05-15 12:27:36.798327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.798344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.435 #31 NEW cov: 12052 ft: 14607 corp: 22/263b lim: 35 exec/s: 31 rss: 71Mb L: 15/22 MS: 1 ShuffleBytes- 00:06:52.435 [2024-05-15 12:27:36.838147] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.838172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.435 #32 NEW cov: 12052 ft: 14622 corp: 23/271b lim: 35 exec/s: 32 rss: 71Mb L: 8/22 MS: 1 PersAutoDict- DE: "\037\000\000\000"- 00:06:52.435 [2024-05-15 12:27:36.878405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.878430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.435 [2024-05-15 12:27:36.878509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.878523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.435 #33 NEW cov: 12052 ft: 14653 corp: 24/290b lim: 35 exec/s: 33 rss: 71Mb L: 19/22 MS: 1 CrossOver- 00:06:52.435 [2024-05-15 12:27:36.928604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.928632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.435 [2024-05-15 12:27:36.928696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.928712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.435 #34 NEW cov: 12052 ft: 14686 corp: 25/305b lim: 35 exec/s: 34 rss: 71Mb L: 15/22 MS: 1 ChangeBinInt- 00:06:52.435 [2024-05-15 12:27:36.978689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.978717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.435 [2024-05-15 12:27:36.978778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:36.978795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.435 #35 NEW cov: 12052 ft: 14722 corp: 26/320b lim: 35 exec/s: 35 rss: 71Mb L: 15/22 MS: 1 ShuffleBytes- 00:06:52.435 [2024-05-15 12:27:37.018593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.435 [2024-05-15 12:27:37.018621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.435 #36 NEW cov: 12052 ft: 14732 corp: 27/328b lim: 35 exec/s: 36 rss: 71Mb L: 8/22 MS: 1 EraseBytes- 00:06:52.693 [2024-05-15 12:27:37.068727] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.693 [2024-05-15 12:27:37.068752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.693 #37 NEW cov: 12052 ft: 14749 corp: 28/341b lim: 35 exec/s: 37 rss: 71Mb L: 13/22 MS: 1 CMP- DE: "3\001\000\000"- 00:06:52.693 [2024-05-15 12:27:37.108856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.108883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.694 #38 NEW cov: 12052 ft: 14765 corp: 29/350b lim: 35 exec/s: 38 rss: 71Mb L: 9/22 MS: 1 InsertByte- 00:06:52.694 [2024-05-15 12:27:37.149140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.149168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.694 [2024-05-15 12:27:37.149245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.149262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.694 #39 NEW cov: 12052 ft: 14766 corp: 30/365b lim: 35 exec/s: 39 rss: 71Mb L: 15/22 MS: 1 CopyPart- 00:06:52.694 [2024-05-15 12:27:37.189256] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 
cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.189283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.694 [2024-05-15 12:27:37.189346] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.189362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.694 #40 NEW cov: 12052 ft: 14779 corp: 31/384b lim: 35 exec/s: 40 rss: 72Mb L: 19/22 MS: 1 CopyPart- 00:06:52.694 [2024-05-15 12:27:37.239243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.239268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.694 #41 NEW cov: 12052 ft: 14794 corp: 32/393b lim: 35 exec/s: 41 rss: 72Mb L: 9/22 MS: 1 ShuffleBytes- 00:06:52.694 [2024-05-15 12:27:37.279876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.279903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.694 [2024-05-15 12:27:37.279982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.279998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.694 [2024-05-15 12:27:37.280060] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.280077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.694 [2024-05-15 12:27:37.280137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.694 [2024-05-15 12:27:37.280151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.694 #42 NEW cov: 12052 ft: 15081 corp: 33/424b lim: 35 exec/s: 42 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:52.952 [2024-05-15 12:27:37.319660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.319687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.952 [2024-05-15 12:27:37.319751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.319764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.952 #43 NEW cov: 12052 ft: 15082 corp: 34/439b lim: 35 exec/s: 43 rss: 72Mb L: 15/31 MS: 1 ChangeByte- 00:06:52.952 [2024-05-15 12:27:37.359784] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.359810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.952 [2024-05-15 12:27:37.359890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.359904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.952 #44 NEW cov: 12052 ft: 15102 corp: 35/454b lim: 35 exec/s: 44 rss: 72Mb L: 15/31 MS: 1 ShuffleBytes- 00:06:52.952 [2024-05-15 12:27:37.399881] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.399907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.952 [2024-05-15 12:27:37.399983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.399998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.952 #45 NEW cov: 12052 ft: 15121 corp: 36/468b lim: 35 exec/s: 45 rss: 72Mb L: 14/31 MS: 1 InsertByte- 00:06:52.952 [2024-05-15 12:27:37.449897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.449924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.952 #46 NEW cov: 12052 ft: 15150 corp: 37/478b lim: 35 exec/s: 46 rss: 72Mb L: 10/31 MS: 1 ChangeBit- 00:06:52.952 [2024-05-15 12:27:37.500025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.500050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.952 #47 NEW cov: 12052 ft: 15154 corp: 38/487b lim: 35 exec/s: 47 rss: 72Mb L: 9/31 MS: 1 ChangeBit- 00:06:52.952 [2024-05-15 12:27:37.540137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.952 [2024-05-15 12:27:37.540164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.952 #48 NEW cov: 12052 ft: 15207 corp: 39/496b lim: 35 exec/s: 48 rss: 72Mb L: 9/31 MS: 1 ChangeBinInt- 00:06:53.211 [2024-05-15 12:27:37.580413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.211 [2024-05-15 12:27:37.580438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.211 [2024-05-15 12:27:37.580518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.211 [2024-05-15 12:27:37.580536] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.211 #49 NEW cov: 12052 ft: 15215 corp: 40/511b lim: 35 exec/s: 49 rss: 72Mb L: 15/31 MS: 1 ChangeBit- 00:06:53.211 [2024-05-15 12:27:37.620520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.211 [2024-05-15 12:27:37.620547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.211 [2024-05-15 12:27:37.620627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.211 [2024-05-15 12:27:37.620642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.211 #55 NEW cov: 12052 ft: 15255 corp: 41/526b lim: 35 exec/s: 27 rss: 72Mb L: 15/31 MS: 1 ChangeBinInt- 00:06:53.211 #55 DONE cov: 12052 ft: 15255 corp: 41/526b lim: 35 exec/s: 27 rss: 72Mb 00:06:53.211 ###### Recommended dictionary. ###### 00:06:53.211 "\000\000\000\000\000\000\000\001" # Uses: 1 00:06:53.211 "\037\000\000\000" # Uses: 1 00:06:53.211 "3\001\000\000" # Uses: 0 00:06:53.211 ###### End of recommended dictionary. ###### 00:06:53.211 Done 55 runs in 2 second(s) 00:06:53.211 [2024-05-15 12:27:37.639757] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:53.211 12:27:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:53.211 [2024-05-15 12:27:37.808604] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:53.211 [2024-05-15 12:27:37.808674] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406764 ] 00:06:53.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.469 [2024-05-15 12:27:37.990279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.469 [2024-05-15 12:27:38.055634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.727 [2024-05-15 12:27:38.115773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.727 [2024-05-15 12:27:38.131725] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:53.727 [2024-05-15 12:27:38.132126] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:53.727 INFO: Running with entropic power schedule (0xFF, 100). 00:06:53.727 INFO: Seed: 2594612289 00:06:53.727 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:53.727 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:53.727 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:53.727 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.727 #2 INITED exec/s: 0 rss: 63Mb 00:06:53.727 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:53.727 This may also happen if the target rejected all inputs we tried so far 00:06:53.727 [2024-05-15 12:27:38.177591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.727 [2024-05-15 12:27:38.177620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.727 [2024-05-15 12:27:38.177680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.727 [2024-05-15 12:27:38.177694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.986 NEW_FUNC[1/686]: 0x497330 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:53.986 NEW_FUNC[2/686]: 0x4b72b0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:53.986 #15 NEW cov: 11803 ft: 11804 corp: 2/24b lim: 35 exec/s: 0 rss: 70Mb L: 23/23 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:06:53.986 [2024-05-15 12:27:38.488388] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.986 [2024-05-15 12:27:38.488424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.986 [2024-05-15 12:27:38.488486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.986 [2024-05-15 12:27:38.488502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.986 #16 NEW cov: 11933 ft: 12425 corp: 3/47b lim: 35 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 ChangeByte- 00:06:53.986 [2024-05-15 12:27:38.538400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.986 [2024-05-15 12:27:38.538426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.986 [2024-05-15 12:27:38.538483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.986 [2024-05-15 12:27:38.538497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.986 #17 NEW cov: 11939 ft: 12721 corp: 4/70b lim: 35 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 ShuffleBytes- 00:06:53.986 [2024-05-15 12:27:38.578230] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.986 [2024-05-15 12:27:38.578255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.986 #18 NEW cov: 12024 ft: 13271 corp: 5/81b lim: 35 exec/s: 0 rss: 70Mb L: 11/23 MS: 1 CrossOver- 00:06:54.243 [2024-05-15 12:27:38.618673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.243 [2024-05-15 12:27:38.618698] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.243 [2024-05-15 12:27:38.618756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.243 [2024-05-15 12:27:38.618770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.243 #19 NEW cov: 12024 ft: 13346 corp: 6/104b lim: 35 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 CopyPart- 00:06:54.243 [2024-05-15 12:27:38.668773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.243 [2024-05-15 12:27:38.668798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.243 [2024-05-15 12:27:38.668872] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.243 [2024-05-15 12:27:38.668885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.243 #20 NEW cov: 12024 ft: 13471 corp: 7/127b lim: 35 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 ChangeBinInt- 00:06:54.243 [2024-05-15 12:27:38.708904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.243 [2024-05-15 12:27:38.708930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.243 [2024-05-15 12:27:38.708986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 12:27:38.709001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.244 #21 NEW cov: 12024 ft: 13624 corp: 8/150b lim: 35 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 ChangeBit- 00:06:54.244 [2024-05-15 12:27:38.759047] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 12:27:38.759072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.244 [2024-05-15 12:27:38.759145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 12:27:38.759159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.244 #22 NEW cov: 12024 ft: 13667 corp: 9/174b lim: 35 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 InsertByte- 00:06:54.244 [2024-05-15 12:27:38.809122] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000078a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 12:27:38.809147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.244 [2024-05-15 12:27:38.809205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 
12:27:38.809219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.244 [2024-05-15 12:27:38.809279] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 12:27:38.809293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.244 #23 NEW cov: 12024 ft: 13817 corp: 10/197b lim: 35 exec/s: 0 rss: 70Mb L: 23/24 MS: 1 ChangeBit- 00:06:54.244 [2024-05-15 12:27:38.849242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000018a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 12:27:38.849267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.244 [2024-05-15 12:27:38.849325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 12:27:38.849338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.244 [2024-05-15 12:27:38.849401] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.244 [2024-05-15 12:27:38.849414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.502 #24 NEW cov: 12024 ft: 13853 corp: 11/220b lim: 35 exec/s: 0 rss: 70Mb L: 23/24 MS: 1 ChangeByte- 00:06:54.502 [2024-05-15 12:27:38.899444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:38.899469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:38.899525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:38.899539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.502 #25 NEW cov: 12024 ft: 13877 corp: 12/243b lim: 35 exec/s: 0 rss: 70Mb L: 23/24 MS: 1 CMP- DE: "\001\000\000\000\002/\253#"- 00:06:54.502 [2024-05-15 12:27:38.939593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:38.939617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:38.939690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:38.939707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:38.939764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:38.939776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:38.939834] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:38.939848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.502 #26 NEW cov: 12024 ft: 14323 corp: 13/272b lim: 35 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:54.502 [2024-05-15 12:27:38.989680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:38.989706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:38.989782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:38.989796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.502 #27 NEW cov: 12024 ft: 14348 corp: 14/295b lim: 35 exec/s: 0 rss: 70Mb L: 23/29 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:54.502 [2024-05-15 12:27:39.029756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:39.029780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:39.029852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000723 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:39.029865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.502 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:54.502 #28 NEW cov: 12047 ft: 14406 corp: 15/319b lim: 35 exec/s: 0 rss: 70Mb L: 24/29 MS: 1 InsertByte- 00:06:54.502 [2024-05-15 12:27:39.079993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:39.080018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:39.080072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:39.080085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:39.080141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 12:27:39.080154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.502 [2024-05-15 12:27:39.080211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.502 [2024-05-15 
12:27:39.080224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.502 #29 NEW cov: 12047 ft: 14438 corp: 16/348b lim: 35 exec/s: 0 rss: 71Mb L: 29/29 MS: 1 ShuffleBytes- 00:06:54.760 [2024-05-15 12:27:39.129759] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007d5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.129786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.760 #37 NEW cov: 12047 ft: 14458 corp: 17/359b lim: 35 exec/s: 0 rss: 71Mb L: 11/29 MS: 3 InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:06:54.760 [2024-05-15 12:27:39.170177] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.170202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.760 [2024-05-15 12:27:39.170275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.170289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.760 #38 NEW cov: 12047 ft: 14476 corp: 18/382b lim: 35 exec/s: 38 rss: 71Mb L: 23/29 MS: 1 ChangeByte- 00:06:54.760 [2024-05-15 12:27:39.210280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.210305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.760 [2024-05-15 12:27:39.210361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000001ab SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.210375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.760 #39 NEW cov: 12047 ft: 14488 corp: 19/406b lim: 35 exec/s: 39 rss: 71Mb L: 24/29 MS: 1 PersAutoDict- DE: "\001\000\000\000\002/\253#"- 00:06:54.760 [2024-05-15 12:27:39.260387] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000078a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.260412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.760 [2024-05-15 12:27:39.260471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.260485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.760 [2024-05-15 12:27:39.260542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.260556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.760 #40 NEW cov: 12047 ft: 14498 corp: 20/429b lim: 35 exec/s: 40 rss: 71Mb L: 23/29 MS: 1 PersAutoDict- DE: 
"\001\000\000\000\002/\253#"- 00:06:54.760 [2024-05-15 12:27:39.300567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.300591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.760 [2024-05-15 12:27:39.300664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.300678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.760 #41 NEW cov: 12047 ft: 14503 corp: 21/452b lim: 35 exec/s: 41 rss: 71Mb L: 23/29 MS: 1 ChangeBinInt- 00:06:54.760 [2024-05-15 12:27:39.340744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.340768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.760 [2024-05-15 12:27:39.340846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.340860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.760 [2024-05-15 12:27:39.340917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.340930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.760 [2024-05-15 12:27:39.340986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.760 [2024-05-15 12:27:39.340999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.760 #42 NEW cov: 12047 ft: 14541 corp: 22/486b lim: 35 exec/s: 42 rss: 71Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:55.018 [2024-05-15 12:27:39.380804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.380829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.018 [2024-05-15 12:27:39.380886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.380900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.018 #43 NEW cov: 12047 ft: 14560 corp: 23/509b lim: 35 exec/s: 43 rss: 71Mb L: 23/34 MS: 1 ChangeByte- 00:06:55.018 #44 NEW cov: 12047 ft: 14628 corp: 24/521b lim: 35 exec/s: 44 rss: 71Mb L: 12/34 MS: 1 EraseBytes- 00:06:55.018 [2024-05-15 12:27:39.471235] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.471259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.018 [2024-05-15 12:27:39.471315] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.471329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.018 [2024-05-15 12:27:39.471388] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.471402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.018 [2024-05-15 12:27:39.471476] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.471489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.018 [2024-05-15 12:27:39.471546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.471566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.018 #45 NEW cov: 12047 ft: 14711 corp: 25/556b lim: 35 exec/s: 45 rss: 71Mb L: 35/35 MS: 1 InsertByte- 00:06:55.018 #46 NEW cov: 12047 ft: 14728 corp: 26/568b lim: 35 exec/s: 46 rss: 71Mb L: 12/35 MS: 1 EraseBytes- 00:06:55.018 [2024-05-15 12:27:39.571284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000078a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.571309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.018 [2024-05-15 12:27:39.571384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.571402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.018 NEW_FUNC[1/1]: 0x4b0780 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:06:55.018 #47 NEW cov: 12085 ft: 14797 corp: 27/594b lim: 35 exec/s: 47 rss: 71Mb L: 26/35 MS: 1 CopyPart- 00:06:55.018 [2024-05-15 12:27:39.621451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.621475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.018 [2024-05-15 12:27:39.621534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.018 [2024-05-15 12:27:39.621548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.276 #48 NEW cov: 12085 ft: 14814 corp: 28/617b lim: 35 exec/s: 48 rss: 71Mb L: 23/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:55.276 [2024-05-15 12:27:39.671289] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.671314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.276 #50 NEW cov: 12085 ft: 14824 corp: 29/626b lim: 35 exec/s: 50 rss: 71Mb L: 9/35 MS: 2 ChangeByte-PersAutoDict- DE: "\001\000\000\000\002/\253#"- 00:06:55.276 [2024-05-15 12:27:39.711663] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000078a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.711688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.276 [2024-05-15 12:27:39.711763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000723 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.711777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.276 #51 NEW cov: 12085 ft: 14854 corp: 30/652b lim: 35 exec/s: 51 rss: 72Mb L: 26/35 MS: 1 ShuffleBytes- 00:06:55.276 [2024-05-15 12:27:39.761818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000078a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.761844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.276 [2024-05-15 12:27:39.761915] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.761928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.276 [2024-05-15 12:27:39.761984] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.761998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.276 #52 NEW cov: 12085 ft: 14887 corp: 31/678b lim: 35 exec/s: 52 rss: 72Mb L: 26/35 MS: 1 ShuffleBytes- 00:06:55.276 [2024-05-15 12:27:39.802004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.802030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.276 [2024-05-15 12:27:39.802105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.802123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.276 [2024-05-15 12:27:39.802180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.802194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.276 [2024-05-15 12:27:39.802252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.802265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.276 #53 NEW cov: 12085 ft: 14892 corp: 32/707b lim: 35 exec/s: 53 rss: 72Mb L: 29/35 MS: 1 CopyPart- 00:06:55.276 [2024-05-15 12:27:39.851930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.276 [2024-05-15 12:27:39.851954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.276 #54 NEW cov: 12085 ft: 15042 corp: 33/727b lim: 35 exec/s: 54 rss: 72Mb L: 20/35 MS: 1 CMP- DE: "\000\206\007^J[\225\304"- 00:06:55.533 [2024-05-15 12:27:39.902176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000078a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:39.902201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.533 [2024-05-15 12:27:39.902261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:39.902275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.533 [2024-05-15 12:27:39.902329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:39.902342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.533 #55 NEW cov: 12085 ft: 15073 corp: 34/754b lim: 35 exec/s: 55 rss: 72Mb L: 27/35 MS: 1 CopyPart- 00:06:55.533 #56 NEW cov: 12085 ft: 15082 corp: 35/766b lim: 35 exec/s: 56 rss: 72Mb L: 12/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\002/\253#"- 00:06:55.533 [2024-05-15 12:27:39.992171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:39.992196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.533 #57 NEW cov: 12085 ft: 15105 corp: 36/778b lim: 35 exec/s: 57 rss: 72Mb L: 12/35 MS: 1 InsertByte- 00:06:55.533 [2024-05-15 12:27:40.032693] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:40.032720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.533 [2024-05-15 12:27:40.032777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:40.032791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.533 [2024-05-15 12:27:40.032846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:40.032859] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.533 [2024-05-15 12:27:40.032912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:40.032929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.533 #58 NEW cov: 12085 ft: 15116 corp: 37/807b lim: 35 exec/s: 58 rss: 72Mb L: 29/35 MS: 1 ChangeBinInt- 00:06:55.533 [2024-05-15 12:27:40.072741] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.533 [2024-05-15 12:27:40.072768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.533 [2024-05-15 12:27:40.072829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.534 [2024-05-15 12:27:40.072843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.534 #59 NEW cov: 12085 ft: 15197 corp: 38/831b lim: 35 exec/s: 59 rss: 72Mb L: 24/35 MS: 1 InsertByte- 00:06:55.534 [2024-05-15 12:27:40.112803] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.534 [2024-05-15 12:27:40.112831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.534 [2024-05-15 12:27:40.112888] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.534 [2024-05-15 12:27:40.112902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.534 [2024-05-15 12:27:40.112957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.534 [2024-05-15 12:27:40.112971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.534 #60 NEW cov: 12085 ft: 15202 corp: 39/855b lim: 35 exec/s: 60 rss: 72Mb L: 24/35 MS: 1 CrossOver- 00:06:55.792 [2024-05-15 12:27:40.152771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.792 [2024-05-15 12:27:40.152796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.792 [2024-05-15 12:27:40.152856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.792 [2024-05-15 12:27:40.152869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.792 #61 NEW cov: 12085 ft: 15211 corp: 40/875b lim: 35 exec/s: 30 rss: 72Mb L: 20/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\002/\253#"- 00:06:55.792 #61 DONE cov: 12085 ft: 15211 corp: 40/875b lim: 35 exec/s: 30 rss: 72Mb 00:06:55.792 ###### Recommended dictionary. 
###### 00:06:55.792 "\001\000\000\000\002/\253#" # Uses: 5 00:06:55.792 "\001\000\000\000\000\000\000\000" # Uses: 1 00:06:55.792 "\000\206\007^J[\225\304" # Uses: 0 00:06:55.792 ###### End of recommended dictionary. ###### 00:06:55.792 Done 61 runs in 2 second(s) 00:06:55.792 [2024-05-15 12:27:40.183622] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:55.792 12:27:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:55.792 [2024-05-15 12:27:40.353339] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
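The "###### Recommended dictionary ######" block printed at the end of run 15 is standard libFuzzer output: each quoted byte string is a token the fuzzer found productive, with a usage count. Those lines are close to libFuzzer's dictionary-file format, so they could in principle be harvested from a saved log and replayed on a later run. The sketch below is an illustration only: the log and dictionary file names are made up, and it assumes the llvm_nvme_fuzz wrapper forwards the stock libFuzzer -dict= option, which these scripts are not shown doing.

    # extract the quoted tokens from a saved copy of this run's output (names illustrative)
    sed -n '/Recommended dictionary/,/End of recommended dictionary/p' run15.log \
        | grep -o '"[^"]*"' > nvmf_15.dict
    # a later invocation could then add -dict=nvmf_15.dict to the llvm_nvme_fuzz command line,
    # assuming libFuzzer flags are passed through to the fuzzing engine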
00:06:55.792 [2024-05-15 12:27:40.353427] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407299 ] 00:06:55.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.050 [2024-05-15 12:27:40.534921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.050 [2024-05-15 12:27:40.602701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.050 [2024-05-15 12:27:40.662862] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.308 [2024-05-15 12:27:40.678808] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:56.308 [2024-05-15 12:27:40.679222] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:56.308 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.308 INFO: Seed: 847636551 00:06:56.308 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:56.308 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:56.308 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:56.308 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.308 #2 INITED exec/s: 0 rss: 63Mb 00:06:56.308 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:56.308 This may also happen if the target rejected all inputs we tried so far 00:06:56.308 [2024-05-15 12:27:40.734413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.308 [2024-05-15 12:27:40.734445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.308 [2024-05-15 12:27:40.734491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.308 [2024-05-15 12:27:40.734507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 NEW_FUNC[1/685]: 0x4987e0 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:56.566 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.566 #9 NEW cov: 11889 ft: 11890 corp: 2/58b lim: 105 exec/s: 0 rss: 70Mb L: 57/57 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:06:56.566 [2024-05-15 12:27:41.065515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.065554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.065611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.065629] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.065689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.065707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.065767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.065785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.566 NEW_FUNC[1/1]: 0x1759f90 in nvme_qpair_check_enabled /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:637 00:06:56.566 #14 NEW cov: 12023 ft: 13016 corp: 3/161b lim: 105 exec/s: 0 rss: 70Mb L: 103/103 MS: 5 CrossOver-CrossOver-CrossOver-CopyPart-InsertRepeatedBytes- 00:06:56.566 [2024-05-15 12:27:41.105544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.105572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.105644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.105660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.105717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.105733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.105788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.105804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.566 #18 NEW cov: 12029 ft: 13400 corp: 4/249b lim: 105 exec/s: 0 rss: 70Mb L: 88/103 MS: 4 ChangeBit-InsertByte-ChangeByte-InsertRepeatedBytes- 00:06:56.566 [2024-05-15 12:27:41.145654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.145681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.145747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.145763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.145819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551574 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.145838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.566 [2024-05-15 12:27:41.145897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.566 [2024-05-15 12:27:41.145913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.566 #19 NEW cov: 12114 ft: 13635 corp: 5/353b lim: 105 exec/s: 0 rss: 70Mb L: 104/104 MS: 1 InsertByte- 00:06:56.825 [2024-05-15 12:27:41.195833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.195861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.195929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.195945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.195998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.196013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.196071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.196088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.825 #20 NEW cov: 12114 ft: 13723 corp: 6/445b lim: 105 exec/s: 0 rss: 70Mb L: 92/104 MS: 1 CrossOver- 00:06:56.825 [2024-05-15 12:27:41.245668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.245694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.245745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.245760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.825 #21 NEW cov: 12114 ft: 13793 corp: 7/502b lim: 105 exec/s: 0 rss: 70Mb L: 57/104 MS: 1 CopyPart- 00:06:56.825 [2024-05-15 12:27:41.286036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.286062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.286116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.286133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.286188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.286203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.286262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.286278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.825 #22 NEW cov: 12114 ft: 13888 corp: 8/606b lim: 105 exec/s: 0 rss: 70Mb L: 104/104 MS: 1 InsertByte- 00:06:56.825 [2024-05-15 12:27:41.326153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.326181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.326230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.326247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.326304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.326319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.326383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.326399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.825 #23 NEW cov: 12114 ft: 13934 corp: 9/710b lim: 105 exec/s: 0 rss: 70Mb L: 104/104 MS: 1 ShuffleBytes- 00:06:56.825 [2024-05-15 12:27:41.376070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.376098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.376131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 
lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.376146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.825 #29 NEW cov: 12114 ft: 13973 corp: 10/767b lim: 105 exec/s: 0 rss: 70Mb L: 57/104 MS: 1 CopyPart- 00:06:56.825 [2024-05-15 12:27:41.426189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.426216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.825 [2024-05-15 12:27:41.426267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.825 [2024-05-15 12:27:41.426283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.083 #30 NEW cov: 12114 ft: 14042 corp: 11/825b lim: 105 exec/s: 0 rss: 70Mb L: 58/104 MS: 1 InsertByte- 00:06:57.083 [2024-05-15 12:27:41.466583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.083 [2024-05-15 12:27:41.466610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.083 [2024-05-15 12:27:41.466661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.083 [2024-05-15 12:27:41.466679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.083 [2024-05-15 12:27:41.466735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.083 [2024-05-15 12:27:41.466750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.083 [2024-05-15 12:27:41.466808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:39680 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.083 [2024-05-15 12:27:41.466823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.083 #31 NEW cov: 12114 ft: 14074 corp: 12/914b lim: 105 exec/s: 0 rss: 70Mb L: 89/104 MS: 1 InsertByte- 00:06:57.083 [2024-05-15 12:27:41.516709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.083 [2024-05-15 12:27:41.516737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.083 [2024-05-15 12:27:41.516789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.083 [2024-05-15 12:27:41.516804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.083 [2024-05-15 12:27:41.516860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.083 [2024-05-15 12:27:41.516876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.083 [2024-05-15 12:27:41.516930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.083 [2024-05-15 12:27:41.516946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.084 #32 NEW cov: 12114 ft: 14091 corp: 13/1018b lim: 105 exec/s: 0 rss: 70Mb L: 104/104 MS: 1 ChangeByte- 00:06:57.084 [2024-05-15 12:27:41.556808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.556835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.084 [2024-05-15 12:27:41.556907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.556922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.084 [2024-05-15 12:27:41.556980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.556995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.084 [2024-05-15 12:27:41.557050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.557066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.084 #33 NEW cov: 12114 ft: 14118 corp: 14/1116b lim: 105 exec/s: 0 rss: 70Mb L: 98/104 MS: 1 InsertRepeatedBytes- 00:06:57.084 [2024-05-15 12:27:41.596679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.596709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.084 [2024-05-15 12:27:41.596758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198712558280928 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.596773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.084 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:57.084 #34 NEW cov: 12137 ft: 14146 corp: 15/1173b lim: 105 exec/s: 0 rss: 70Mb L: 57/104 MS: 1 ChangeBinInt- 00:06:57.084 [2024-05-15 
12:27:41.647116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.647144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.084 [2024-05-15 12:27:41.647195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.647210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.084 [2024-05-15 12:27:41.647267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.647283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.084 [2024-05-15 12:27:41.647340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4278190080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.647356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.084 #35 NEW cov: 12137 ft: 14165 corp: 16/1272b lim: 105 exec/s: 0 rss: 70Mb L: 99/104 MS: 1 InsertRepeatedBytes- 00:06:57.084 [2024-05-15 12:27:41.686942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.686969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.084 [2024-05-15 12:27:41.687000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.084 [2024-05-15 12:27:41.687031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.343 #36 NEW cov: 12137 ft: 14177 corp: 17/1330b lim: 105 exec/s: 0 rss: 70Mb L: 58/104 MS: 1 InsertByte- 00:06:57.343 [2024-05-15 12:27:41.727327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.727355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.727432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.727449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.727515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.727528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.727589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.727605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.343 #37 NEW cov: 12137 ft: 14205 corp: 18/1434b lim: 105 exec/s: 37 rss: 70Mb L: 104/104 MS: 1 ShuffleBytes- 00:06:57.343 [2024-05-15 12:27:41.767561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.767589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.767665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.767682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.767737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.767753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.767807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.767823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.767877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.767893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:57.343 #38 NEW cov: 12137 ft: 14271 corp: 19/1539b lim: 105 exec/s: 38 rss: 70Mb L: 105/105 MS: 1 CopyPart- 00:06:57.343 [2024-05-15 12:27:41.817598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.817625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.817675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.817688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.817746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.817761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.817816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.817832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.343 #39 NEW cov: 12137 ft: 14296 corp: 20/1643b lim: 105 exec/s: 39 rss: 70Mb L: 104/105 MS: 1 ShuffleBytes- 00:06:57.343 [2024-05-15 12:27:41.857747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.857775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.857834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446739675663040511 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.857851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.857906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.857923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.857979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.857995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.343 #40 NEW cov: 12137 ft: 14347 corp: 21/1747b lim: 105 exec/s: 40 rss: 71Mb L: 104/105 MS: 1 ChangeBinInt- 00:06:57.343 [2024-05-15 12:27:41.907632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.907659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.907694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446739675663040511 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.907710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.343 #41 NEW cov: 12137 ft: 14388 corp: 22/1808b lim: 105 exec/s: 41 rss: 71Mb L: 61/105 MS: 1 EraseBytes- 00:06:57.343 [2024-05-15 12:27:41.958088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.958114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.958171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.958184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.958241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.958257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.958313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.343 [2024-05-15 12:27:41.958329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.343 [2024-05-15 12:27:41.958390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65372 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.344 [2024-05-15 12:27:41.958406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:57.602 #42 NEW cov: 12137 ft: 14471 corp: 23/1913b lim: 105 exec/s: 42 rss: 71Mb L: 105/105 MS: 1 InsertByte- 00:06:57.602 [2024-05-15 12:27:41.997964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.602 [2024-05-15 12:27:41.997996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.602 [2024-05-15 12:27:41.998032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.602 [2024-05-15 12:27:41.998047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.602 [2024-05-15 12:27:41.998103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.602 [2024-05-15 12:27:41.998120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.603 #43 NEW cov: 12137 ft: 14757 corp: 24/1988b lim: 105 exec/s: 43 rss: 71Mb L: 75/105 MS: 1 EraseBytes- 00:06:57.603 [2024-05-15 12:27:42.047966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.047995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.048026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198712558280928 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.048042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.603 #44 NEW cov: 12137 ft: 14837 corp: 25/2045b lim: 105 
exec/s: 44 rss: 71Mb L: 57/105 MS: 1 CopyPart- 00:06:57.603 [2024-05-15 12:27:42.098352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.098385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.098443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.098458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.098513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.098528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.098584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.098601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.603 #45 NEW cov: 12137 ft: 14944 corp: 26/2149b lim: 105 exec/s: 45 rss: 71Mb L: 104/105 MS: 1 ShuffleBytes- 00:06:57.603 [2024-05-15 12:27:42.138212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.138240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.138290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:14849 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.138306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.603 #46 NEW cov: 12137 ft: 15040 corp: 27/2207b lim: 105 exec/s: 46 rss: 71Mb L: 58/105 MS: 1 ChangeBinInt- 00:06:57.603 [2024-05-15 12:27:42.188717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.188744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.188803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.188819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.188876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.188891] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.188946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.188960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.603 [2024-05-15 12:27:42.189017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.603 [2024-05-15 12:27:42.189033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:57.861 #47 NEW cov: 12137 ft: 15133 corp: 28/2312b lim: 105 exec/s: 47 rss: 72Mb L: 105/105 MS: 1 CrossOver- 00:06:57.861 [2024-05-15 12:27:42.238711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583014655 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.238738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.238790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446739675663040511 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.238806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.238863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.238879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.238935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.238951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.861 #48 NEW cov: 12137 ft: 15155 corp: 29/2416b lim: 105 exec/s: 48 rss: 72Mb L: 104/105 MS: 1 CopyPart- 00:06:57.861 [2024-05-15 12:27:42.278843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.278870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.278927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.278942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.278997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:57.861 [2024-05-15 12:27:42.279014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.279074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.279090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.861 #49 NEW cov: 12137 ft: 15174 corp: 30/2514b lim: 105 exec/s: 49 rss: 72Mb L: 98/105 MS: 1 ChangeBinInt- 00:06:57.861 [2024-05-15 12:27:42.328906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.328934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.329005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.329019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.329079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.329095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.329151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.329168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.861 #50 NEW cov: 12137 ft: 15194 corp: 31/2612b lim: 105 exec/s: 50 rss: 72Mb L: 98/105 MS: 1 ChangeBinInt- 00:06:57.861 [2024-05-15 12:27:42.379091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65342 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.379118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.379168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.379182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.379240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.379256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.379312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.379328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.861 #51 NEW cov: 12137 ft: 15204 corp: 32/2700b lim: 105 exec/s: 51 rss: 72Mb L: 88/105 MS: 1 ChangeByte- 00:06:57.861 [2024-05-15 12:27:42.419324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.419351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.419424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.419441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.419498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.419514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.419571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.419587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.419644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.419660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:57.861 #52 NEW cov: 12137 ft: 15207 corp: 33/2805b lim: 105 exec/s: 52 rss: 72Mb L: 105/105 MS: 1 CopyPart- 00:06:57.861 [2024-05-15 12:27:42.469357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.469389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.861 [2024-05-15 12:27:42.469450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.861 [2024-05-15 12:27:42.469466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.862 [2024-05-15 12:27:42.469521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.862 [2024-05-15 12:27:42.469535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.862 [2024-05-15 12:27:42.469592] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.862 [2024-05-15 12:27:42.469609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.120 #53 NEW cov: 12137 ft: 15220 corp: 34/2903b lim: 105 exec/s: 53 rss: 72Mb L: 98/105 MS: 1 ChangeBit- 00:06:58.120 [2024-05-15 12:27:42.509464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.509491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.509558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.509574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.509630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.509645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.509703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.509722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.120 #54 NEW cov: 12137 ft: 15232 corp: 35/3002b lim: 105 exec/s: 54 rss: 72Mb L: 99/105 MS: 1 InsertByte- 00:06:58.120 [2024-05-15 12:27:42.549566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.549593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.549660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.549677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.549732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709543679 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.549747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.549803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.549819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:06:58.120 #55 NEW cov: 12137 ft: 15259 corp: 36/3094b lim: 105 exec/s: 55 rss: 72Mb L: 92/105 MS: 1 ShuffleBytes- 00:06:58.120 [2024-05-15 12:27:42.599492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16204198715192303840 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.599519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.599551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.599565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.120 #56 NEW cov: 12137 ft: 15285 corp: 37/3151b lim: 105 exec/s: 56 rss: 72Mb L: 57/105 MS: 1 ChangeBit- 00:06:58.120 [2024-05-15 12:27:42.639828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.639854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.639922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.639938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.639993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551405 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.640009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.640066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.640083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.120 #57 NEW cov: 12137 ft: 15293 corp: 38/3250b lim: 105 exec/s: 57 rss: 72Mb L: 99/105 MS: 1 InsertByte- 00:06:58.120 [2024-05-15 12:27:42.689974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.690001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.690070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.690087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.690144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:58.120 [2024-05-15 12:27:42.690160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.690221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.690237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.120 #58 NEW cov: 12137 ft: 15314 corp: 39/3348b lim: 105 exec/s: 58 rss: 72Mb L: 98/105 MS: 1 ChangeBit- 00:06:58.120 [2024-05-15 12:27:42.730125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.730153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.730221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.730237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.730295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.730311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.120 [2024-05-15 12:27:42.730368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4294967295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.120 [2024-05-15 12:27:42.730387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.379 #59 NEW cov: 12137 ft: 15317 corp: 40/3450b lim: 105 exec/s: 29 rss: 72Mb L: 102/105 MS: 1 CrossOver- 00:06:58.379 #59 DONE cov: 12137 ft: 15317 corp: 40/3450b lim: 105 exec/s: 29 rss: 72Mb 00:06:58.379 Done 59 runs in 2 second(s) 00:06:58.379 [2024-05-15 12:27:42.762103] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.379 12:27:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:06:58.379 [2024-05-15 12:27:42.928991] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:06:58.379 [2024-05-15 12:27:42.929065] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407600 ] 00:06:58.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.637 [2024-05-15 12:27:43.109720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.637 [2024-05-15 12:27:43.177600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.637 [2024-05-15 12:27:43.237719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.637 [2024-05-15 12:27:43.253663] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:58.637 [2024-05-15 12:27:43.254094] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:58.894 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.894 INFO: Seed: 3420638009 00:06:58.894 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:06:58.894 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:06:58.894 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:58.894 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.894 #2 INITED exec/s: 0 rss: 63Mb 00:06:58.894 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:58.894 This may also happen if the target rejected all inputs we tried so far 00:06:58.894 [2024-05-15 12:27:43.323346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12587190071073877678 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.894 [2024-05-15 12:27:43.323387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.151 NEW_FUNC[1/687]: 0x49bb60 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:59.151 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.151 #14 NEW cov: 11914 ft: 11913 corp: 2/44b lim: 120 exec/s: 0 rss: 70Mb L: 43/43 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:59.151 [2024-05-15 12:27:43.654534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12587190071073877678 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.151 [2024-05-15 12:27:43.654584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.151 #15 NEW cov: 12044 ft: 12666 corp: 3/81b lim: 120 exec/s: 0 rss: 70Mb L: 37/43 MS: 1 EraseBytes- 00:06:59.151 [2024-05-15 12:27:43.714526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12586998008726335150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.151 [2024-05-15 12:27:43.714558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.151 #16 NEW cov: 12050 ft: 12930 corp: 4/126b lim: 120 exec/s: 0 rss: 70Mb L: 45/45 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:59.408 [2024-05-15 12:27:43.775499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3110627431550757675 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.408 [2024-05-15 12:27:43.775533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.408 [2024-05-15 12:27:43.775586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.408 [2024-05-15 12:27:43.775610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.408 [2024-05-15 12:27:43.775750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.408 [2024-05-15 12:27:43.775772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.408 [2024-05-15 12:27:43.775911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.408 [2024-05-15 12:27:43.775936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.408 #24 NEW cov: 12135 ft: 14074 corp: 5/245b lim: 120 exec/s: 0 rss: 70Mb L: 119/119 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 
00:06:59.408 [2024-05-15 12:27:43.825196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.408 [2024-05-15 12:27:43.825231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.409 [2024-05-15 12:27:43.825338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.409 [2024-05-15 12:27:43.825362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.409 #25 NEW cov: 12135 ft: 14553 corp: 6/295b lim: 120 exec/s: 0 rss: 70Mb L: 50/119 MS: 1 InsertRepeatedBytes- 00:06:59.409 [2024-05-15 12:27:43.875580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.409 [2024-05-15 12:27:43.875612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.409 [2024-05-15 12:27:43.875701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.409 [2024-05-15 12:27:43.875725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.409 [2024-05-15 12:27:43.875859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.409 [2024-05-15 12:27:43.875883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.409 #26 NEW cov: 12135 ft: 14923 corp: 7/386b lim: 120 exec/s: 0 rss: 70Mb L: 91/119 MS: 1 CopyPart- 00:06:59.409 [2024-05-15 12:27:43.935210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.409 [2024-05-15 12:27:43.935235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.409 #32 NEW cov: 12135 ft: 14996 corp: 8/433b lim: 120 exec/s: 0 rss: 70Mb L: 47/119 MS: 1 EraseBytes- 00:06:59.409 [2024-05-15 12:27:43.985606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.409 [2024-05-15 12:27:43.985639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.409 [2024-05-15 12:27:43.985763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.409 [2024-05-15 12:27:43.985789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.409 #33 NEW cov: 12135 ft: 15013 corp: 9/481b lim: 120 exec/s: 0 rss: 70Mb L: 48/119 MS: 1 InsertByte- 00:06:59.667 [2024-05-15 12:27:44.046398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:0 lba:3110627431327784974 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.046432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.046503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.046530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.046657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.046681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.046813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.046838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.667 #36 NEW cov: 12135 ft: 15090 corp: 10/598b lim: 120 exec/s: 0 rss: 70Mb L: 117/119 MS: 3 InsertRepeatedBytes-InsertByte-CrossOver- 00:06:59.667 [2024-05-15 12:27:44.096617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3110627431550757675 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.096652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.096724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.096749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.096889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.096918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.097061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.097089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.667 #37 NEW cov: 12135 ft: 15236 corp: 11/717b lim: 120 exec/s: 0 rss: 70Mb L: 119/119 MS: 1 ChangeBit- 00:06:59.667 [2024-05-15 12:27:44.155874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12587190071073877678 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.155905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.667 #38 NEW cov: 12135 ft: 15367 
corp: 12/754b lim: 120 exec/s: 0 rss: 70Mb L: 37/119 MS: 1 ShuffleBytes- 00:06:59.667 [2024-05-15 12:27:44.206314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.206346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.206447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.206475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.667 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:59.667 #39 NEW cov: 12158 ft: 15418 corp: 13/802b lim: 120 exec/s: 0 rss: 71Mb L: 48/119 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:59.667 [2024-05-15 12:27:44.267122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3110627431327784974 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.267158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.267231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.267256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.267385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.267406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.667 [2024-05-15 12:27:44.267553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3110627432037296939 len:11052 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.667 [2024-05-15 12:27:44.267578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.924 #40 NEW cov: 12158 ft: 15445 corp: 14/921b lim: 120 exec/s: 40 rss: 71Mb L: 119/119 MS: 1 CrossOver- 00:06:59.924 [2024-05-15 12:27:44.326488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.924 [2024-05-15 12:27:44.326514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.924 #41 NEW cov: 12158 ft: 15486 corp: 15/964b lim: 120 exec/s: 41 rss: 71Mb L: 43/119 MS: 1 EraseBytes- 00:06:59.925 [2024-05-15 12:27:44.386589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.925 [2024-05-15 12:27:44.386616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:06:59.925 #42 NEW cov: 12158 ft: 15512 corp: 16/1011b lim: 120 exec/s: 42 rss: 71Mb L: 47/119 MS: 1 CopyPart- 00:06:59.925 [2024-05-15 12:27:44.437128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.925 [2024-05-15 12:27:44.437161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.925 [2024-05-15 12:27:44.437279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.925 [2024-05-15 12:27:44.437298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.925 #43 NEW cov: 12158 ft: 15522 corp: 17/1059b lim: 120 exec/s: 43 rss: 71Mb L: 48/119 MS: 1 ShuffleBytes- 00:06:59.925 [2024-05-15 12:27:44.486919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12586998008726334638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.925 [2024-05-15 12:27:44.486951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.925 #44 NEW cov: 12158 ft: 15548 corp: 18/1104b lim: 120 exec/s: 44 rss: 71Mb L: 45/119 MS: 1 ChangeBit- 00:07:00.182 [2024-05-15 12:27:44.547161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12586998008726335150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.547188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.182 #45 NEW cov: 12158 ft: 15623 corp: 19/1150b lim: 120 exec/s: 45 rss: 71Mb L: 46/119 MS: 1 InsertByte- 00:07:00.182 [2024-05-15 12:27:44.597384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12587190071073877678 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.597415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.182 #46 NEW cov: 12158 ft: 15638 corp: 20/1193b lim: 120 exec/s: 46 rss: 71Mb L: 43/119 MS: 1 ShuffleBytes- 00:07:00.182 [2024-05-15 12:27:44.648131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069593804543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.648165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.182 [2024-05-15 12:27:44.648214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.648235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.182 [2024-05-15 12:27:44.648374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:12587190073825341102 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.648402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:00.182 #47 NEW cov: 12158 ft: 15657 corp: 21/1272b lim: 120 exec/s: 47 rss: 71Mb L: 79/119 MS: 1 CrossOver- 00:07:00.182 [2024-05-15 12:27:44.708614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12586998008726335150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.708647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.182 [2024-05-15 12:27:44.708729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12601545297637584558 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.708751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.182 [2024-05-15 12:27:44.708885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11936128518282651045 len:42406 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.708912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.182 [2024-05-15 12:27:44.709046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11936128518282651045 len:42406 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.709070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.182 #48 NEW cov: 12158 ft: 15669 corp: 22/1376b lim: 120 exec/s: 48 rss: 72Mb L: 104/119 MS: 1 InsertRepeatedBytes- 00:07:00.182 [2024-05-15 12:27:44.768844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12586998008726335150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.768877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.182 [2024-05-15 12:27:44.768936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.768953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.182 [2024-05-15 12:27:44.769083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11936128518434238117 len:42406 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.769106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.182 [2024-05-15 12:27:44.769235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11936128518282651045 len:42406 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.182 [2024-05-15 12:27:44.769260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.440 #49 NEW cov: 12158 ft: 15685 corp: 23/1494b lim: 120 exec/s: 49 rss: 72Mb L: 118/119 MS: 1 InsertRepeatedBytes- 00:07:00.440 [2024-05-15 12:27:44.828944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11936128515682774693 len:42406 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:44.828976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.440 [2024-05-15 12:27:44.829053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11936128518282651045 len:42406 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:44.829076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.440 [2024-05-15 12:27:44.829206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11936128518282651045 len:42406 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:44.829232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.440 [2024-05-15 12:27:44.829373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11936128518282651045 len:42406 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:44.829401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.440 #50 NEW cov: 12158 ft: 15717 corp: 24/1598b lim: 120 exec/s: 50 rss: 72Mb L: 104/119 MS: 1 CopyPart- 00:07:00.440 [2024-05-15 12:27:44.878186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12587190290117209774 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:44.878217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.440 #51 NEW cov: 12158 ft: 15732 corp: 25/1644b lim: 120 exec/s: 51 rss: 72Mb L: 46/119 MS: 1 CopyPart- 00:07:00.440 [2024-05-15 12:27:44.928354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12586998008726335150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:44.928383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.440 #52 NEW cov: 12158 ft: 15762 corp: 26/1683b lim: 120 exec/s: 52 rss: 72Mb L: 39/119 MS: 1 EraseBytes- 00:07:00.440 [2024-05-15 12:27:44.978792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:44.978825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.440 [2024-05-15 12:27:44.978932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:44.978957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.440 #53 NEW cov: 12158 ft: 15780 corp: 27/1739b lim: 120 exec/s: 53 rss: 72Mb L: 56/119 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:00.440 [2024-05-15 12:27:45.039028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:45.039056] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.440 [2024-05-15 12:27:45.039189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.440 [2024-05-15 12:27:45.039209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.698 #59 NEW cov: 12158 ft: 15789 corp: 28/1795b lim: 120 exec/s: 59 rss: 72Mb L: 56/119 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:00.698 [2024-05-15 12:27:45.099062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4294967040 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.698 [2024-05-15 12:27:45.099095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.698 [2024-05-15 12:27:45.099222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.698 [2024-05-15 12:27:45.099246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.698 #60 NEW cov: 12158 ft: 15823 corp: 29/1843b lim: 120 exec/s: 60 rss: 72Mb L: 48/119 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:00.698 [2024-05-15 12:27:45.149254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.698 [2024-05-15 12:27:45.149290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.698 [2024-05-15 12:27:45.149419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.698 [2024-05-15 12:27:45.149443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.698 #61 NEW cov: 12158 ft: 15837 corp: 30/1899b lim: 120 exec/s: 61 rss: 72Mb L: 56/119 MS: 1 ChangeBit- 00:07:00.698 [2024-05-15 12:27:45.199084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12587190290117209774 len:44719 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.698 [2024-05-15 12:27:45.199111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.698 #62 NEW cov: 12158 ft: 15848 corp: 31/1945b lim: 120 exec/s: 62 rss: 72Mb L: 46/119 MS: 1 ChangeBinInt- 00:07:00.698 [2024-05-15 12:27:45.259355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12586998008726334638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.698 [2024-05-15 12:27:45.259385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.698 #63 NEW cov: 12158 ft: 15860 corp: 32/1990b lim: 120 exec/s: 63 rss: 72Mb L: 45/119 MS: 1 ShuffleBytes- 00:07:00.956 [2024-05-15 12:27:45.320384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12586998008726335150 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:00.956 [2024-05-15 12:27:45.320418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.956 [2024-05-15 12:27:45.320505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12601545297637584558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.956 [2024-05-15 12:27:45.320524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.956 [2024-05-15 12:27:45.320665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.956 [2024-05-15 12:27:45.320689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.956 [2024-05-15 12:27:45.320822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.956 [2024-05-15 12:27:45.320846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.956 #64 pulse cov: 12158 ft: 15882 corp: 32/1990b lim: 120 exec/s: 32 rss: 72Mb 00:07:00.956 #64 NEW cov: 12158 ft: 15882 corp: 33/2094b lim: 120 exec/s: 32 rss: 72Mb L: 104/119 MS: 1 InsertRepeatedBytes- 00:07:00.956 #64 DONE cov: 12158 ft: 15882 corp: 33/2094b lim: 120 exec/s: 32 rss: 72Mb 00:07:00.956 ###### Recommended dictionary. ###### 00:07:00.956 "\000\000\000\000\000\000\000\000" # Uses: 6 00:07:00.956 ###### End of recommended dictionary. ###### 00:07:00.956 Done 64 runs in 2 second(s) 00:07:00.956 [2024-05-15 12:27:45.342596] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:00.956 12:27:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:00.956 [2024-05-15 12:27:45.509086] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:07:00.956 [2024-05-15 12:27:45.509154] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408124 ] 00:07:00.956 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.214 [2024-05-15 12:27:45.682817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.214 [2024-05-15 12:27:45.747241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.214 [2024-05-15 12:27:45.806565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.214 [2024-05-15 12:27:45.822509] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:01.214 [2024-05-15 12:27:45.822883] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:01.472 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.472 INFO: Seed: 1694710683 00:07:01.472 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:07:01.472 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:07:01.472 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:01.472 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.472 #2 INITED exec/s: 0 rss: 63Mb 00:07:01.472 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:01.472 This may also happen if the target rejected all inputs we tried so far 00:07:01.472 [2024-05-15 12:27:45.871947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.472 [2024-05-15 12:27:45.871976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.472 [2024-05-15 12:27:45.872013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.472 [2024-05-15 12:27:45.872029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.472 [2024-05-15 12:27:45.872084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.472 [2024-05-15 12:27:45.872100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.731 NEW_FUNC[1/684]: 0x49f450 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:01.731 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:01.731 #13 NEW cov: 11856 ft: 11857 corp: 2/64b lim: 100 exec/s: 0 rss: 70Mb L: 63/63 MS: 1 InsertRepeatedBytes- 00:07:01.731 [2024-05-15 12:27:46.182517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.731 [2024-05-15 12:27:46.182552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.731 [2024-05-15 12:27:46.182617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.731 [2024-05-15 12:27:46.182636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.731 NEW_FUNC[1/1]: 0xf07900 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:290 00:07:01.731 #14 NEW cov: 11987 ft: 12721 corp: 3/113b lim: 100 exec/s: 0 rss: 70Mb L: 49/63 MS: 1 EraseBytes- 00:07:01.731 [2024-05-15 12:27:46.232812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.731 [2024-05-15 12:27:46.232840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.731 [2024-05-15 12:27:46.232889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.731 [2024-05-15 12:27:46.232902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.731 [2024-05-15 12:27:46.232952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.731 [2024-05-15 12:27:46.232967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.731 [2024-05-15 12:27:46.233017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.731 [2024-05-15 12:27:46.233031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.731 #15 NEW cov: 11993 ft: 13259 corp: 4/207b lim: 100 exec/s: 0 rss: 70Mb L: 94/94 MS: 1 CopyPart- 00:07:01.731 [2024-05-15 12:27:46.282669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.731 [2024-05-15 12:27:46.282695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.731 [2024-05-15 12:27:46.282727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.731 [2024-05-15 12:27:46.282741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.731 #16 NEW cov: 12078 ft: 13558 corp: 5/256b lim: 100 exec/s: 0 rss: 70Mb L: 49/94 MS: 1 CMP- DE: "\377\377\377\377\377\377\002\377"- 00:07:01.731 [2024-05-15 12:27:46.323055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.731 [2024-05-15 12:27:46.323081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.731 [2024-05-15 12:27:46.323146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.731 [2024-05-15 12:27:46.323160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.731 [2024-05-15 12:27:46.323211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.731 [2024-05-15 12:27:46.323224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.731 [2024-05-15 12:27:46.323275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.731 [2024-05-15 12:27:46.323289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.990 #17 NEW cov: 12078 ft: 13707 corp: 6/350b lim: 100 exec/s: 0 rss: 70Mb L: 94/94 MS: 1 ChangeBit- 00:07:01.990 [2024-05-15 12:27:46.373138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.990 [2024-05-15 12:27:46.373163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.373227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.990 [2024-05-15 12:27:46.373241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.373293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.990 [2024-05-15 12:27:46.373307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.373357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.990 [2024-05-15 12:27:46.373371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:07:01.990 #18 NEW cov: 12078 ft: 13766 corp: 7/444b lim: 100 exec/s: 0 rss: 70Mb L: 94/94 MS: 1 ShuffleBytes- 00:07:01.990 [2024-05-15 12:27:46.423072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.990 [2024-05-15 12:27:46.423096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.423129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.990 [2024-05-15 12:27:46.423143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.990 #19 NEW cov: 12078 ft: 13857 corp: 8/493b lim: 100 exec/s: 0 rss: 71Mb L: 49/94 MS: 1 ChangeBinInt- 00:07:01.990 [2024-05-15 12:27:46.473214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.990 [2024-05-15 12:27:46.473239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.473282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.990 [2024-05-15 12:27:46.473296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.990 #20 NEW cov: 12078 ft: 13868 corp: 9/542b lim: 100 exec/s: 0 rss: 71Mb L: 49/94 MS: 1 ChangeBinInt- 00:07:01.990 [2024-05-15 12:27:46.523346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.990 [2024-05-15 12:27:46.523372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.523414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.990 [2024-05-15 12:27:46.523428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.990 #26 NEW cov: 12078 ft: 13895 corp: 10/591b lim: 100 exec/s: 0 rss: 71Mb L: 49/94 MS: 1 ShuffleBytes- 00:07:01.990 [2024-05-15 12:27:46.563389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.990 [2024-05-15 12:27:46.563415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.990 #27 NEW cov: 12078 ft: 14239 corp: 11/626b lim: 100 exec/s: 0 rss: 71Mb L: 35/94 MS: 1 EraseBytes- 00:07:01.990 [2024-05-15 12:27:46.603847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:01.990 [2024-05-15 12:27:46.603872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.603920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:01.990 [2024-05-15 12:27:46.603933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.603981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:01.990 
[2024-05-15 12:27:46.603995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.990 [2024-05-15 12:27:46.604050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:01.990 [2024-05-15 12:27:46.604064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.249 #28 NEW cov: 12078 ft: 14289 corp: 12/720b lim: 100 exec/s: 0 rss: 71Mb L: 94/94 MS: 1 ChangeBit- 00:07:02.249 [2024-05-15 12:27:46.653729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.249 [2024-05-15 12:27:46.653754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.653782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.249 [2024-05-15 12:27:46.653797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.249 #29 NEW cov: 12078 ft: 14305 corp: 13/769b lim: 100 exec/s: 0 rss: 71Mb L: 49/94 MS: 1 ChangeBinInt- 00:07:02.249 [2024-05-15 12:27:46.694071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.249 [2024-05-15 12:27:46.694096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.694145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.249 [2024-05-15 12:27:46.694158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.694206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.249 [2024-05-15 12:27:46.694220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.694270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.249 [2024-05-15 12:27:46.694283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.249 #30 NEW cov: 12078 ft: 14329 corp: 14/863b lim: 100 exec/s: 0 rss: 71Mb L: 94/94 MS: 1 ChangeBinInt- 00:07:02.249 [2024-05-15 12:27:46.734196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.249 [2024-05-15 12:27:46.734221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.734270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.249 [2024-05-15 12:27:46.734284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.734333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.249 [2024-05-15 12:27:46.734362] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.734414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.249 [2024-05-15 12:27:46.734427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.249 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:02.249 #31 NEW cov: 12101 ft: 14340 corp: 15/945b lim: 100 exec/s: 0 rss: 71Mb L: 82/94 MS: 1 CopyPart- 00:07:02.249 [2024-05-15 12:27:46.774246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.249 [2024-05-15 12:27:46.774271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.774320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.249 [2024-05-15 12:27:46.774338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.774389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.249 [2024-05-15 12:27:46.774404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.774454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.249 [2024-05-15 12:27:46.774468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.249 #32 NEW cov: 12101 ft: 14388 corp: 16/1044b lim: 100 exec/s: 0 rss: 71Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:07:02.249 [2024-05-15 12:27:46.814189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.249 [2024-05-15 12:27:46.814213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.814267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.249 [2024-05-15 12:27:46.814281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.249 #33 NEW cov: 12101 ft: 14396 corp: 17/1093b lim: 100 exec/s: 0 rss: 71Mb L: 49/99 MS: 1 CrossOver- 00:07:02.249 [2024-05-15 12:27:46.854358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.249 [2024-05-15 12:27:46.854387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.854438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.249 [2024-05-15 12:27:46.854453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.249 [2024-05-15 12:27:46.854503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.249 [2024-05-15 
12:27:46.854517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.507 #34 NEW cov: 12101 ft: 14424 corp: 18/1165b lim: 100 exec/s: 34 rss: 71Mb L: 72/99 MS: 1 EraseBytes- 00:07:02.507 [2024-05-15 12:27:46.904560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.507 [2024-05-15 12:27:46.904585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:46.904632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.507 [2024-05-15 12:27:46.904644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:46.904694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.507 [2024-05-15 12:27:46.904708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.507 #35 NEW cov: 12101 ft: 14457 corp: 19/1235b lim: 100 exec/s: 35 rss: 71Mb L: 70/99 MS: 1 CrossOver- 00:07:02.507 [2024-05-15 12:27:46.944659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.507 [2024-05-15 12:27:46.944685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:46.944726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.507 [2024-05-15 12:27:46.944739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:46.944788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.507 [2024-05-15 12:27:46.944805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.507 #36 NEW cov: 12101 ft: 14500 corp: 20/1307b lim: 100 exec/s: 36 rss: 72Mb L: 72/99 MS: 1 ChangeByte- 00:07:02.507 [2024-05-15 12:27:46.994901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.507 [2024-05-15 12:27:46.994926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:46.994976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.507 [2024-05-15 12:27:46.994989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:46.995036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.507 [2024-05-15 12:27:46.995066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:46.995115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.507 [2024-05-15 12:27:46.995129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.507 #37 NEW cov: 12101 ft: 14515 corp: 21/1401b lim: 100 exec/s: 37 rss: 72Mb L: 94/99 MS: 1 CopyPart- 00:07:02.507 [2024-05-15 12:27:47.044829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.507 [2024-05-15 12:27:47.044854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:47.044908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.507 [2024-05-15 12:27:47.044923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.507 #38 NEW cov: 12101 ft: 14531 corp: 22/1457b lim: 100 exec/s: 38 rss: 72Mb L: 56/99 MS: 1 CrossOver- 00:07:02.507 [2024-05-15 12:27:47.095323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.507 [2024-05-15 12:27:47.095348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:47.095419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.507 [2024-05-15 12:27:47.095432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:47.095482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.507 [2024-05-15 12:27:47.095497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:47.095546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.507 [2024-05-15 12:27:47.095560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.507 [2024-05-15 12:27:47.095612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:02.507 [2024-05-15 12:27:47.095626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.765 #39 NEW cov: 12101 ft: 14603 corp: 23/1557b lim: 100 exec/s: 39 rss: 72Mb L: 100/100 MS: 1 InsertByte- 00:07:02.766 [2024-05-15 12:27:47.145491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.766 [2024-05-15 12:27:47.145517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.145565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.766 [2024-05-15 12:27:47.145578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.145627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.766 [2024-05-15 12:27:47.145641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:02.766 [2024-05-15 12:27:47.145691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.766 [2024-05-15 12:27:47.145704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.145753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:02.766 [2024-05-15 12:27:47.145767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.766 #40 NEW cov: 12101 ft: 14647 corp: 24/1657b lim: 100 exec/s: 40 rss: 72Mb L: 100/100 MS: 1 ShuffleBytes- 00:07:02.766 [2024-05-15 12:27:47.195503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.766 [2024-05-15 12:27:47.195529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.195576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.766 [2024-05-15 12:27:47.195589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.195639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.766 [2024-05-15 12:27:47.195652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.195700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.766 [2024-05-15 12:27:47.195714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.766 #41 NEW cov: 12101 ft: 14676 corp: 25/1739b lim: 100 exec/s: 41 rss: 72Mb L: 82/100 MS: 1 CrossOver- 00:07:02.766 [2024-05-15 12:27:47.245634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.766 [2024-05-15 12:27:47.245659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.245706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.766 [2024-05-15 12:27:47.245719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.245769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.766 [2024-05-15 12:27:47.245798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.245849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.766 [2024-05-15 12:27:47.245863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.766 #42 NEW cov: 12101 ft: 14683 corp: 26/1833b lim: 100 exec/s: 42 rss: 72Mb L: 94/100 MS: 1 CrossOver- 00:07:02.766 [2024-05-15 12:27:47.285698] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.766 [2024-05-15 12:27:47.285723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.285773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.766 [2024-05-15 12:27:47.285789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.285839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:02.766 [2024-05-15 12:27:47.285852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.285901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:02.766 [2024-05-15 12:27:47.285915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.766 #43 NEW cov: 12101 ft: 14689 corp: 27/1927b lim: 100 exec/s: 43 rss: 72Mb L: 94/100 MS: 1 ChangeByte- 00:07:02.766 [2024-05-15 12:27:47.325676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.766 [2024-05-15 12:27:47.325700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.325735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.766 [2024-05-15 12:27:47.325749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.766 #44 NEW cov: 12101 ft: 14696 corp: 28/1976b lim: 100 exec/s: 44 rss: 72Mb L: 49/100 MS: 1 CrossOver- 00:07:02.766 [2024-05-15 12:27:47.365741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:02.766 [2024-05-15 12:27:47.365766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.766 [2024-05-15 12:27:47.365815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:02.766 [2024-05-15 12:27:47.365830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.024 #45 NEW cov: 12101 ft: 14764 corp: 29/2033b lim: 100 exec/s: 45 rss: 73Mb L: 57/100 MS: 1 InsertByte- 00:07:03.024 [2024-05-15 12:27:47.415927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.024 [2024-05-15 12:27:47.415953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.024 [2024-05-15 12:27:47.415995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.024 [2024-05-15 12:27:47.416010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.024 #47 NEW cov: 12101 ft: 14774 corp: 30/2091b lim: 100 exec/s: 47 rss: 73Mb L: 58/100 MS: 2 CrossOver-CrossOver- 
00:07:03.024 [2024-05-15 12:27:47.466228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.024 [2024-05-15 12:27:47.466255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.024 [2024-05-15 12:27:47.466303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.024 [2024-05-15 12:27:47.466317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.024 [2024-05-15 12:27:47.466367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:03.024 [2024-05-15 12:27:47.466388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.024 [2024-05-15 12:27:47.466439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:03.024 [2024-05-15 12:27:47.466452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.024 #48 NEW cov: 12101 ft: 14809 corp: 31/2188b lim: 100 exec/s: 48 rss: 73Mb L: 97/100 MS: 1 InsertRepeatedBytes- 00:07:03.024 [2024-05-15 12:27:47.516194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.024 [2024-05-15 12:27:47.516219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.024 [2024-05-15 12:27:47.516253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.025 [2024-05-15 12:27:47.516267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.025 #49 NEW cov: 12101 ft: 14811 corp: 32/2245b lim: 100 exec/s: 49 rss: 73Mb L: 57/100 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\002\377"- 00:07:03.025 [2024-05-15 12:27:47.566270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.025 [2024-05-15 12:27:47.566294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.025 [2024-05-15 12:27:47.566346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.025 [2024-05-15 12:27:47.566371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.025 #50 NEW cov: 12101 ft: 14819 corp: 33/2294b lim: 100 exec/s: 50 rss: 73Mb L: 49/100 MS: 1 ChangeBinInt- 00:07:03.025 [2024-05-15 12:27:47.606638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.025 [2024-05-15 12:27:47.606664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.025 [2024-05-15 12:27:47.606715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.025 [2024-05-15 12:27:47.606729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.025 
[2024-05-15 12:27:47.606778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:03.025 [2024-05-15 12:27:47.606793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.025 [2024-05-15 12:27:47.606842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:03.025 [2024-05-15 12:27:47.606856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.025 #51 NEW cov: 12101 ft: 14841 corp: 34/2388b lim: 100 exec/s: 51 rss: 73Mb L: 94/100 MS: 1 ChangeBit- 00:07:03.284 [2024-05-15 12:27:47.646766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.284 [2024-05-15 12:27:47.646791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.646836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.284 [2024-05-15 12:27:47.646850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.646901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:03.284 [2024-05-15 12:27:47.646915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.646965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:03.284 [2024-05-15 12:27:47.646978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.284 #52 NEW cov: 12101 ft: 14855 corp: 35/2482b lim: 100 exec/s: 52 rss: 73Mb L: 94/100 MS: 1 ChangeBit- 00:07:03.284 [2024-05-15 12:27:47.686842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.284 [2024-05-15 12:27:47.686868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.686917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.284 [2024-05-15 12:27:47.686931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.686984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:03.284 [2024-05-15 12:27:47.686997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.687048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:03.284 [2024-05-15 12:27:47.687062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.284 #53 NEW cov: 12101 ft: 14862 corp: 36/2581b lim: 100 exec/s: 53 rss: 73Mb L: 99/100 MS: 1 CrossOver- 00:07:03.284 [2024-05-15 12:27:47.736837] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.284 [2024-05-15 12:27:47.736861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.736893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.284 [2024-05-15 12:27:47.736908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.284 #59 NEW cov: 12101 ft: 14895 corp: 37/2634b lim: 100 exec/s: 59 rss: 73Mb L: 53/100 MS: 1 EraseBytes- 00:07:03.284 [2024-05-15 12:27:47.776889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.284 [2024-05-15 12:27:47.776914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.776948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.284 [2024-05-15 12:27:47.776963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.284 #66 NEW cov: 12101 ft: 14902 corp: 38/2691b lim: 100 exec/s: 66 rss: 73Mb L: 57/100 MS: 2 ShuffleBytes-CrossOver- 00:07:03.284 [2024-05-15 12:27:47.816900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.284 [2024-05-15 12:27:47.816925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.284 #69 NEW cov: 12101 ft: 14911 corp: 39/2716b lim: 100 exec/s: 69 rss: 73Mb L: 25/100 MS: 3 PersAutoDict-PersAutoDict-PersAutoDict- DE: "\377\377\377\377\377\377\002\377"-"\377\377\377\377\377\377\002\377"-"\377\377\377\377\377\377\002\377"- 00:07:03.284 [2024-05-15 12:27:47.857123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:03.284 [2024-05-15 12:27:47.857147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.284 [2024-05-15 12:27:47.857181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:03.284 [2024-05-15 12:27:47.857195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.284 #70 NEW cov: 12101 ft: 14933 corp: 40/2765b lim: 100 exec/s: 35 rss: 74Mb L: 49/100 MS: 1 ChangeBinInt- 00:07:03.284 #70 DONE cov: 12101 ft: 14933 corp: 40/2765b lim: 100 exec/s: 35 rss: 74Mb 00:07:03.284 ###### Recommended dictionary. ###### 00:07:03.284 "\377\377\377\377\377\377\002\377" # Uses: 4 00:07:03.284 ###### End of recommended dictionary. 
###### 00:07:03.284 Done 70 runs in 2 second(s) 00:07:03.284 [2024-05-15 12:27:47.888111] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:03.543 12:27:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:03.543 [2024-05-15 12:27:48.056434] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
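[editor's note] The nvmf/run.sh trace above (start_llvm_fuzz 19 1 0x1) records how each fuzzer run is prepared: a per-run TCP port is derived from the fuzzer type, a corpus directory and a patched JSON config are created, LSAN leak suppressions are set, and llvm_nvme_fuzz is launched against the in-process NVMe/TCP target. Below is a minimal standalone sketch of that sequence reconstructed from the traced commands; it is not the actual nvmf/run.sh. The $rootdir variable, the output redirections, and the exact file layout are assumptions for illustration only.

  #!/usr/bin/env bash
  # Sketch of the per-run setup traced above (fuzzer type 19); paths are assumed.
  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # assumption

  fuzzer_type=19
  timen=1        # run-time budget passed via -t, as in the trace
  core=0x1       # core mask passed via -m

  # Port 44XX is derived from the fuzzer type (19 -> 4419, 20 -> 4420).
  port="44$(printf %02d "$fuzzer_type")"
  corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"
  nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
  suppress_file=/var/tmp/suppress_nvmf_fuzz

  mkdir -p "$corpus_dir"

  # Transport ID the fuzzer connects to; the JSON config is patched so the
  # target listens on the same per-run port (redirection is assumed here,
  # since set -x does not show it in the trace).
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

  # Allocations that intentionally outlive the run are suppressed for LSAN.
  { echo "leak:spdk_nvmf_qpair_disconnect"; echo "leak:nvmf_ctrlr_create"; } > "$suppress_file"
  export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"

  # Launch the guided fuzzer against the in-process NVMe/TCP target.
  "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
      -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
      -D "$corpus_dir" -Z "$fuzzer_type"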
00:07:03.543 [2024-05-15 12:27:48.056504] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408652 ] 00:07:03.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.801 [2024-05-15 12:27:48.240144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.801 [2024-05-15 12:27:48.305124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.802 [2024-05-15 12:27:48.364960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.802 [2024-05-15 12:27:48.380916] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:03.802 [2024-05-15 12:27:48.381355] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:03.802 INFO: Running with entropic power schedule (0xFF, 100). 00:07:03.802 INFO: Seed: 4252674246 00:07:03.802 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:07:03.802 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:07:03.802 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:03.802 INFO: A corpus is not provided, starting from an empty corpus 00:07:03.802 #2 INITED exec/s: 0 rss: 63Mb 00:07:03.802 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:03.802 This may also happen if the target rejected all inputs we tried so far 00:07:04.060 [2024-05-15 12:27:48.429945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.060 [2024-05-15 12:27:48.429978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.060 [2024-05-15 12:27:48.430010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 00:07:04.060 [2024-05-15 12:27:48.430025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.060 [2024-05-15 12:27:48.430076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.060 [2024-05-15 12:27:48.430091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.060 [2024-05-15 12:27:48.430141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:04.060 [2024-05-15 12:27:48.430155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.319 NEW_FUNC[1/685]: 0x4a2410 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:04.319 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:04.319 #10 NEW cov: 11820 ft: 11828 corp: 2/41b lim: 50 exec/s: 0 rss: 70Mb L: 
40/40 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:04.319 [2024-05-15 12:27:48.761016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.319 [2024-05-15 12:27:48.761052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.761087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:04.319 [2024-05-15 12:27:48.761102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.761155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.319 [2024-05-15 12:27:48.761170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.761243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:04.319 [2024-05-15 12:27:48.761258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.319 #11 NEW cov: 11965 ft: 12449 corp: 3/82b lim: 50 exec/s: 0 rss: 70Mb L: 41/41 MS: 1 InsertByte- 00:07:04.319 [2024-05-15 12:27:48.810944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.319 [2024-05-15 12:27:48.810973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.811009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:04.319 [2024-05-15 12:27:48.811025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.811078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.319 [2024-05-15 12:27:48.811094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.811150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:04.319 [2024-05-15 12:27:48.811164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.319 #12 NEW cov: 11971 ft: 12752 corp: 4/129b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 CopyPart- 00:07:04.319 [2024-05-15 12:27:48.861019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.319 [2024-05-15 12:27:48.861047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.861087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 
nsid:0 lba:5497853135693827148 len:19533 00:07:04.319 [2024-05-15 12:27:48.861103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.861154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.319 [2024-05-15 12:27:48.861170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.319 [2024-05-15 12:27:48.861224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5369525538728460184 len:34305 00:07:04.319 [2024-05-15 12:27:48.861239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.319 #13 NEW cov: 12056 ft: 13095 corp: 5/169b lim: 50 exec/s: 0 rss: 70Mb L: 40/47 MS: 1 CMP- DE: "?\230J\204c\007\206\000"- 00:07:04.320 [2024-05-15 12:27:48.901235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.320 [2024-05-15 12:27:48.901263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.320 [2024-05-15 12:27:48.901311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:04.320 [2024-05-15 12:27:48.901327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.320 [2024-05-15 12:27:48.901384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.320 [2024-05-15 12:27:48.901400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.320 [2024-05-15 12:27:48.901450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:04.320 [2024-05-15 12:27:48.901465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.320 #14 NEW cov: 12056 ft: 13159 corp: 6/216b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 ChangeBinInt- 00:07:04.578 [2024-05-15 12:27:48.951311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.578 [2024-05-15 12:27:48.951340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.578 [2024-05-15 12:27:48.951393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:04.578 [2024-05-15 12:27:48.951408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.578 [2024-05-15 12:27:48.951460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.579 [2024-05-15 12:27:48.951475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:48.951529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135442168908 len:19533 00:07:04.579 [2024-05-15 12:27:48.951545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.579 #15 NEW cov: 12056 ft: 13257 corp: 7/263b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 ChangeByte- 00:07:04.579 [2024-05-15 12:27:48.991476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.579 [2024-05-15 12:27:48.991504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:48.991554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:04.579 [2024-05-15 12:27:48.991569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:48.991619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.579 [2024-05-15 12:27:48.991634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:48.991683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:04.579 [2024-05-15 12:27:48.991698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.579 #16 NEW cov: 12056 ft: 13304 corp: 8/310b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 ShuffleBytes- 00:07:04.579 [2024-05-15 12:27:49.041304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553738 len:19533 00:07:04.579 [2024-05-15 12:27:49.041332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.041365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5488564461462375500 len:19533 00:07:04.579 [2024-05-15 12:27:49.041385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.579 #17 NEW cov: 12056 ft: 13718 corp: 9/333b lim: 50 exec/s: 0 rss: 70Mb L: 23/47 MS: 1 CrossOver- 00:07:04.579 [2024-05-15 12:27:49.091724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.579 [2024-05-15 12:27:49.091752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.091801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:04.579 [2024-05-15 12:27:49.091813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 
12:27:49.091866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.579 [2024-05-15 12:27:49.091881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.091933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:04.579 [2024-05-15 12:27:49.091947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.579 #18 NEW cov: 12056 ft: 13736 corp: 10/380b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 ShuffleBytes- 00:07:04.579 [2024-05-15 12:27:49.131865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.579 [2024-05-15 12:27:49.131895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.131930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 00:07:04.579 [2024-05-15 12:27:49.131946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.131995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4416989225124908108 len:19533 00:07:04.579 [2024-05-15 12:27:49.132011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.132062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:04.579 [2024-05-15 12:27:49.132077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.579 #19 NEW cov: 12056 ft: 13785 corp: 11/427b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 CopyPart- 00:07:04.579 [2024-05-15 12:27:49.181975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.579 [2024-05-15 12:27:49.182002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.182069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489208068557900 len:43434 00:07:04.579 [2024-05-15 12:27:49.182083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.182136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135140179020 len:19533 00:07:04.579 [2024-05-15 12:27:49.182151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.579 [2024-05-15 12:27:49.182202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:04.579 [2024-05-15 12:27:49.182218] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.838 #20 NEW cov: 12056 ft: 13838 corp: 12/474b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:07:04.838 [2024-05-15 12:27:49.222103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.838 [2024-05-15 12:27:49.222130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.222179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 00:07:04.838 [2024-05-15 12:27:49.222194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.222245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497836643019410508 len:19533 00:07:04.838 [2024-05-15 12:27:49.222259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.222312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:04.838 [2024-05-15 12:27:49.222327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.838 #21 NEW cov: 12056 ft: 13849 corp: 13/523b lim: 50 exec/s: 0 rss: 70Mb L: 49/49 MS: 1 CopyPart- 00:07:04.838 [2024-05-15 12:27:49.272272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890937807621292 len:46004 00:07:04.838 [2024-05-15 12:27:49.272299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.272344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489209796611148 len:43434 00:07:04.838 [2024-05-15 12:27:49.272360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.272416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135140179020 len:19533 00:07:04.838 [2024-05-15 12:27:49.272432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.272486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:04.838 [2024-05-15 12:27:49.272500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.838 #22 NEW cov: 12056 ft: 13864 corp: 14/570b lim: 50 exec/s: 0 rss: 70Mb L: 47/49 MS: 1 ChangeBinInt- 00:07:04.838 [2024-05-15 12:27:49.322255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890937807621292 len:46004 00:07:04.838 [2024-05-15 12:27:49.322282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.322323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489209796611148 len:43434 00:07:04.838 [2024-05-15 12:27:49.322338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.322396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135140179020 len:19533 00:07:04.838 [2024-05-15 12:27:49.322427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.838 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:04.838 #23 NEW cov: 12079 ft: 14124 corp: 15/608b lim: 50 exec/s: 0 rss: 71Mb L: 38/49 MS: 1 EraseBytes- 00:07:04.838 [2024-05-15 12:27:49.372554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:04.838 [2024-05-15 12:27:49.372580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.372630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:04.838 [2024-05-15 12:27:49.372645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.372698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19520 00:07:04.838 [2024-05-15 12:27:49.372713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.372765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:04.838 [2024-05-15 12:27:49.372779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.838 #24 NEW cov: 12079 ft: 14168 corp: 16/649b lim: 50 exec/s: 0 rss: 71Mb L: 41/49 MS: 1 ChangeByte- 00:07:04.838 [2024-05-15 12:27:49.412640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497822350894976076 len:19533 00:07:04.838 [2024-05-15 12:27:49.412670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.412721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5488564461462375500 len:19533 00:07:04.838 [2024-05-15 12:27:49.412736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.412790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:04.838 [2024-05-15 12:27:49.412804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.838 [2024-05-15 12:27:49.412860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5476377148162591820 len:1 00:07:04.838 [2024-05-15 12:27:49.412875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.838 #25 NEW cov: 12079 ft: 14228 corp: 17/697b lim: 50 exec/s: 25 rss: 71Mb L: 48/49 MS: 1 InsertByte- 00:07:05.097 [2024-05-15 12:27:49.462776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.097 [2024-05-15 12:27:49.462803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.097 [2024-05-15 12:27:49.462869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:05.097 [2024-05-15 12:27:49.462884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.097 [2024-05-15 12:27:49.462937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:05.097 [2024-05-15 12:27:49.462952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.097 [2024-05-15 12:27:49.463005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:05.097 [2024-05-15 12:27:49.463020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.097 #26 NEW cov: 12079 ft: 14270 corp: 18/744b lim: 50 exec/s: 26 rss: 71Mb L: 47/49 MS: 1 ShuffleBytes- 00:07:05.097 [2024-05-15 12:27:49.502884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.098 [2024-05-15 12:27:49.502911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.502959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853137589652556 len:19533 00:07:05.098 [2024-05-15 12:27:49.502974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.503025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4416989225124908108 len:19533 00:07:05.098 [2024-05-15 12:27:49.503041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.503093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:05.098 [2024-05-15 12:27:49.503107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.098 #27 NEW cov: 12079 ft: 14291 corp: 19/791b lim: 50 exec/s: 27 rss: 71Mb L: 47/49 MS: 1 ChangeBinInt- 00:07:05.098 [2024-05-15 
12:27:49.542869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890937807621292 len:46004 00:07:05.098 [2024-05-15 12:27:49.542894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.542928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489209796611148 len:43434 00:07:05.098 [2024-05-15 12:27:49.542943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.542996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135140162636 len:19533 00:07:05.098 [2024-05-15 12:27:49.543011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.098 #28 NEW cov: 12079 ft: 14382 corp: 20/829b lim: 50 exec/s: 28 rss: 71Mb L: 38/49 MS: 1 ChangeBit- 00:07:05.098 [2024-05-15 12:27:49.593151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.098 [2024-05-15 12:27:49.593179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.593242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:05.098 [2024-05-15 12:27:49.593258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.593310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5517837859040283724 len:19533 00:07:05.098 [2024-05-15 12:27:49.593326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.593377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:05.098 [2024-05-15 12:27:49.593397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.098 #29 NEW cov: 12079 ft: 14395 corp: 21/876b lim: 50 exec/s: 29 rss: 71Mb L: 47/49 MS: 1 ChangeByte- 00:07:05.098 [2024-05-15 12:27:49.633250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.098 [2024-05-15 12:27:49.633276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.633327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 00:07:05.098 [2024-05-15 12:27:49.633342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.633399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4416989225124908108 len:19520 00:07:05.098 [2024-05-15 12:27:49.633415] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.633471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:542121134117258339 len:19533 00:07:05.098 [2024-05-15 12:27:49.633486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.098 #30 NEW cov: 12079 ft: 14410 corp: 22/923b lim: 50 exec/s: 30 rss: 71Mb L: 47/49 MS: 1 PersAutoDict- DE: "?\230J\204c\007\206\000"- 00:07:05.098 [2024-05-15 12:27:49.673347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:3149 00:07:05.098 [2024-05-15 12:27:49.673377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.673430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:05.098 [2024-05-15 12:27:49.673444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.673500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:05.098 [2024-05-15 12:27:49.673515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.673570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:05.098 [2024-05-15 12:27:49.673583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.098 #31 NEW cov: 12079 ft: 14416 corp: 23/970b lim: 50 exec/s: 31 rss: 71Mb L: 47/49 MS: 1 ChangeBit- 00:07:05.098 [2024-05-15 12:27:49.713456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.098 [2024-05-15 12:27:49.713483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.713533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489208068557900 len:43434 00:07:05.098 [2024-05-15 12:27:49.713548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.713603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135140179020 len:19533 00:07:05.098 [2024-05-15 12:27:49.713620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.098 [2024-05-15 12:27:49.713674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:05.098 [2024-05-15 12:27:49.713687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.357 #32 NEW cov: 
12079 ft: 14422 corp: 24/1018b lim: 50 exec/s: 32 rss: 71Mb L: 48/49 MS: 1 CopyPart- 00:07:05.357 [2024-05-15 12:27:49.753600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.357 [2024-05-15 12:27:49.753627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.753675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 00:07:05.357 [2024-05-15 12:27:49.753689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.753742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:05.357 [2024-05-15 12:27:49.753757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.753812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:4582494552925686860 len:25352 00:07:05.357 [2024-05-15 12:27:49.753828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.357 #33 NEW cov: 12079 ft: 14433 corp: 25/1060b lim: 50 exec/s: 33 rss: 71Mb L: 42/49 MS: 1 CopyPart- 00:07:05.357 [2024-05-15 12:27:49.803736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.357 [2024-05-15 12:27:49.803764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.803826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853137589652556 len:19533 00:07:05.357 [2024-05-15 12:27:49.803843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.803898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4416989225124908108 len:19533 00:07:05.357 [2024-05-15 12:27:49.803913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.803971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853852953365580 len:19533 00:07:05.357 [2024-05-15 12:27:49.803987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.357 #34 NEW cov: 12079 ft: 14451 corp: 26/1108b lim: 50 exec/s: 34 rss: 71Mb L: 48/49 MS: 1 InsertByte- 00:07:05.357 [2024-05-15 12:27:49.853743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890937807621292 len:46004 00:07:05.357 [2024-05-15 12:27:49.853771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.853811] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489209796611148 len:43434 00:07:05.357 [2024-05-15 12:27:49.853827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.853880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5479557261653969996 len:19533 00:07:05.357 [2024-05-15 12:27:49.853910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.357 #35 NEW cov: 12079 ft: 14475 corp: 27/1147b lim: 50 exec/s: 35 rss: 71Mb L: 39/49 MS: 1 InsertByte- 00:07:05.357 [2024-05-15 12:27:49.904008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.357 [2024-05-15 12:27:49.904035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.904084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952601161681996 len:19533 00:07:05.357 [2024-05-15 12:27:49.904098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.904150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:05.357 [2024-05-15 12:27:49.904166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.904222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:05.357 [2024-05-15 12:27:49.904237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.357 #36 NEW cov: 12079 ft: 14483 corp: 28/1194b lim: 50 exec/s: 36 rss: 71Mb L: 47/49 MS: 1 ChangeBit- 00:07:05.357 [2024-05-15 12:27:49.943839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.357 [2024-05-15 12:27:49.943865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.357 [2024-05-15 12:27:49.943921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:16281 00:07:05.357 [2024-05-15 12:27:49.943937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.357 #37 NEW cov: 12079 ft: 14528 corp: 29/1220b lim: 50 exec/s: 37 rss: 71Mb L: 26/49 MS: 1 EraseBytes- 00:07:05.617 [2024-05-15 12:27:49.984262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.617 [2024-05-15 12:27:49.984289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.617 [2024-05-15 12:27:49.984329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 
len:19533 00:07:05.617 [2024-05-15 12:27:49.984345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.617 [2024-05-15 12:27:49.984401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5506860334948568140 len:19533 00:07:05.617 [2024-05-15 12:27:49.984416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.617 [2024-05-15 12:27:49.984473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:05.617 [2024-05-15 12:27:49.984488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.617 #38 NEW cov: 12079 ft: 14571 corp: 30/1267b lim: 50 exec/s: 38 rss: 71Mb L: 47/49 MS: 1 ChangeBinInt- 00:07:05.617 [2024-05-15 12:27:50.034397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.617 [2024-05-15 12:27:50.034426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.617 [2024-05-15 12:27:50.034474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853137589652556 len:19533 00:07:05.617 [2024-05-15 12:27:50.034489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.617 [2024-05-15 12:27:50.034543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4416989225124908108 len:19533 00:07:05.617 [2024-05-15 12:27:50.034558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.617 [2024-05-15 12:27:50.034613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853852953365580 len:19533 00:07:05.617 [2024-05-15 12:27:50.034629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.617 #39 NEW cov: 12079 ft: 14584 corp: 31/1315b lim: 50 exec/s: 39 rss: 72Mb L: 48/49 MS: 1 ShuffleBytes- 00:07:05.617 [2024-05-15 12:27:50.084653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890937807621292 len:46004 00:07:05.617 [2024-05-15 12:27:50.084684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.617 [2024-05-15 12:27:50.084722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489209796611148 len:43434 00:07:05.617 [2024-05-15 12:27:50.084741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.617 [2024-05-15 12:27:50.084809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135140179020 len:19533 00:07:05.617 [2024-05-15 12:27:50.084828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:05.617 [2024-05-15 12:27:50.084883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5497853135693827148 len:19533 00:07:05.617 [2024-05-15 12:27:50.084898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.617 #40 NEW cov: 12079 ft: 14594 corp: 32/1362b lim: 50 exec/s: 40 rss: 72Mb L: 47/49 MS: 1 CrossOver- 00:07:05.617 [2024-05-15 12:27:50.124676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.617 [2024-05-15 12:27:50.124706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.124741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:05.618 [2024-05-15 12:27:50.124757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.124811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497963086856604748 len:19533 00:07:05.618 [2024-05-15 12:27:50.124825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.124878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:05.618 [2024-05-15 12:27:50.124893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.618 #41 NEW cov: 12079 ft: 14603 corp: 33/1409b lim: 50 exec/s: 41 rss: 72Mb L: 47/49 MS: 1 ChangeBinInt- 00:07:05.618 [2024-05-15 12:27:50.164720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.618 [2024-05-15 12:27:50.164748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.164789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:05.618 [2024-05-15 12:27:50.164804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.164858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:05.618 [2024-05-15 12:27:50.164873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.164928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:21392425922592844 len:1 00:07:05.618 [2024-05-15 12:27:50.164942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.618 #42 NEW cov: 12079 ft: 14610 corp: 34/1456b lim: 50 exec/s: 42 rss: 72Mb L: 47/49 MS: 1 ShuffleBytes- 00:07:05.618 [2024-05-15 12:27:50.204899] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.618 [2024-05-15 12:27:50.204927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.204974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3119952532442205260 len:19533 00:07:05.618 [2024-05-15 12:27:50.204989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.205045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135693827148 len:19533 00:07:05.618 [2024-05-15 12:27:50.205064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.618 [2024-05-15 12:27:50.205120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1280068684 len:1 00:07:05.618 [2024-05-15 12:27:50.205137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.618 #43 NEW cov: 12079 ft: 14623 corp: 35/1503b lim: 50 exec/s: 43 rss: 72Mb L: 47/49 MS: 1 ChangeBinInt- 00:07:05.877 [2024-05-15 12:27:50.244773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:742051930717310028 len:19533 00:07:05.877 [2024-05-15 12:27:50.244801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.244842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5488564461462375500 len:19533 00:07:05.877 [2024-05-15 12:27:50.244857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.877 #44 NEW cov: 12079 ft: 14649 corp: 36/1526b lim: 50 exec/s: 44 rss: 72Mb L: 23/49 MS: 1 ShuffleBytes- 00:07:05.877 [2024-05-15 12:27:50.295134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.877 [2024-05-15 12:27:50.295161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.295211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 00:07:05.877 [2024-05-15 12:27:50.295226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.295277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1280049152 len:48 00:07:05.877 [2024-05-15 12:27:50.295293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.295346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5729813508918103116 len:34305 00:07:05.877 [2024-05-15 12:27:50.295361] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.877 #45 NEW cov: 12079 ft: 14665 corp: 37/1566b lim: 50 exec/s: 45 rss: 72Mb L: 40/49 MS: 1 CrossOver- 00:07:05.877 [2024-05-15 12:27:50.334938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.877 [2024-05-15 12:27:50.334965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.877 #46 NEW cov: 12079 ft: 14980 corp: 38/1584b lim: 50 exec/s: 46 rss: 72Mb L: 18/49 MS: 1 EraseBytes- 00:07:05.877 [2024-05-15 12:27:50.385308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12948890937807621292 len:46004 00:07:05.877 [2024-05-15 12:27:50.385335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.385387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489209796611148 len:43434 00:07:05.877 [2024-05-15 12:27:50.385401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.385454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5497853135140162636 len:45236 00:07:05.877 [2024-05-15 12:27:50.385492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.877 #47 NEW cov: 12079 ft: 14983 corp: 39/1622b lim: 50 exec/s: 47 rss: 72Mb L: 38/49 MS: 1 ChangeBinInt- 00:07:05.877 [2024-05-15 12:27:50.425517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5497853137220553804 len:19533 00:07:05.877 [2024-05-15 12:27:50.425544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.425593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5497853137589652556 len:19533 00:07:05.877 [2024-05-15 12:27:50.425606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.425656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4416989225124908108 len:19533 00:07:05.877 [2024-05-15 12:27:50.425671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.877 [2024-05-15 12:27:50.425723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5529659808062131276 len:19533 00:07:05.877 [2024-05-15 12:27:50.425738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.877 #48 NEW cov: 12079 ft: 14994 corp: 40/1670b lim: 50 exec/s: 24 rss: 72Mb L: 48/49 MS: 1 CopyPart- 00:07:05.877 #48 DONE cov: 12079 ft: 14994 corp: 40/1670b lim: 50 exec/s: 24 rss: 72Mb 00:07:05.877 ###### Recommended dictionary. 
###### 00:07:05.877 "?\230J\204c\007\206\000" # Uses: 1 00:07:05.877 ###### End of recommended dictionary. ###### 00:07:05.877 Done 48 runs in 2 second(s) 00:07:05.877 [2024-05-15 12:27:50.455907] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:06.136 12:27:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:06.136 [2024-05-15 12:27:50.625349] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
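The run.sh lines above tear down run 19 and launch run 20 of llvm_nvme_fuzz against a local NVMe/TCP listener on 127.0.0.1:4420, with an empty corpus directory and per-run LSAN leak suppressions. The C sketch below is only an assumption about the general shape of such a harness, not the actual code in test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c (whose TestOneInput and per-opcode fuzz_nvm_* routines are named in the NEW_FUNC lines further down): libFuzzer hands the harness a raw byte buffer, and the harness overlays it onto an NVMe submission-queue entry; the struct here is a simplified stand-in for spdk_nvme_cmd.

/* Illustrative libFuzzer harness sketch -- an assumption, not SPDK's code. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct sqe {                      /* simplified 64-byte submission-queue entry */
	uint8_t  opc;             /* opcode, e.g. 0x11 RESERVATION ACQUIRE */
	uint8_t  fuse_psdt;
	uint16_t cid;             /* command identifier (cid in the log)   */
	uint32_t nsid;            /* namespace id (nsid in the log)        */
	uint8_t  rsvd_mptr_dptr[32];
	uint32_t cdw10_15[6];     /* opcode-specific dwords                */
};

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct sqe cmd;

	if (size < sizeof(cmd)) {
		return 0;         /* too short to form one full command */
	}
	memcpy(&cmd, data, sizeof(cmd));
	/* A real harness would now submit cmd on the NVMe-oF queue pair; the
	 * nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE
	 * pairs in this log are the trace of exactly such submissions. */
	(void)cmd;
	return 0;
}

Built standalone with something like clang -fsanitize=fuzzer harness.c (harness.c being a hypothetical file name), libFuzzer supplies main() and drives the function with mutated inputs, which is what produces the "#N NEW cov ... corp ... exec/s ... MS:" status lines in the output below.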
00:07:06.136 [2024-05-15 12:27:50.625427] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409001 ] 00:07:06.136 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.395 [2024-05-15 12:27:50.801627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.395 [2024-05-15 12:27:50.868416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.395 [2024-05-15 12:27:50.927741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.395 [2024-05-15 12:27:50.943693] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:06.395 [2024-05-15 12:27:50.944134] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:06.395 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.395 INFO: Seed: 2522695463 00:07:06.395 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:07:06.395 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:07:06.395 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:06.395 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.395 #2 INITED exec/s: 0 rss: 63Mb 00:07:06.395 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:06.395 This may also happen if the target rejected all inputs we tried so far 00:07:06.395 [2024-05-15 12:27:51.010648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.395 [2024-05-15 12:27:51.010689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.395 [2024-05-15 12:27:51.010796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.395 [2024-05-15 12:27:51.010820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.395 [2024-05-15 12:27:51.010938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.395 [2024-05-15 12:27:51.010958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.395 [2024-05-15 12:27:51.011082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.395 [2024-05-15 12:27:51.011105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.911 NEW_FUNC[1/687]: 0x4a3fd0 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:06.911 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.911 #8 NEW cov: 11880 ft: 11868 corp: 2/87b lim: 90 exec/s: 0 rss: 70Mb L: 86/86 MS: 1 InsertRepeatedBytes- 00:07:06.911 [2024-05-15 12:27:51.341151] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.911 [2024-05-15 12:27:51.341193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-05-15 12:27:51.341306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.911 [2024-05-15 12:27:51.341329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.911 [2024-05-15 12:27:51.341460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.911 [2024-05-15 12:27:51.341484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.911 [2024-05-15 12:27:51.341601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.911 [2024-05-15 12:27:51.341624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.911 #9 NEW cov: 12023 ft: 12449 corp: 3/173b lim: 90 exec/s: 0 rss: 70Mb L: 86/86 MS: 1 ChangeBit- 00:07:06.911 [2024-05-15 12:27:51.391433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.911 [2024-05-15 12:27:51.391471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-05-15 12:27:51.391572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.911 [2024-05-15 12:27:51.391595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.911 [2024-05-15 12:27:51.391719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.911 [2024-05-15 12:27:51.391741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.911 [2024-05-15 12:27:51.391865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.911 [2024-05-15 12:27:51.391888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.911 #10 NEW cov: 12029 ft: 12668 corp: 4/259b lim: 90 exec/s: 0 rss: 70Mb L: 86/86 MS: 1 ChangeByte- 00:07:06.911 [2024-05-15 12:27:51.441486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.911 [2024-05-15 12:27:51.441514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-05-15 12:27:51.441590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.912 [2024-05-15 12:27:51.441613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.441726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.912 [2024-05-15 12:27:51.441744] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.441867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.912 [2024-05-15 12:27:51.441891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.912 #12 NEW cov: 12114 ft: 13042 corp: 5/348b lim: 90 exec/s: 0 rss: 70Mb L: 89/89 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:06.912 [2024-05-15 12:27:51.481636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.912 [2024-05-15 12:27:51.481667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.481723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.912 [2024-05-15 12:27:51.481741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.481858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.912 [2024-05-15 12:27:51.481884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.481998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.912 [2024-05-15 12:27:51.482019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.912 #18 NEW cov: 12114 ft: 13166 corp: 6/434b lim: 90 exec/s: 0 rss: 70Mb L: 86/89 MS: 1 ChangeByte- 00:07:06.912 [2024-05-15 12:27:51.522021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:06.912 [2024-05-15 12:27:51.522052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.522141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:06.912 [2024-05-15 12:27:51.522161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.522280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:06.912 [2024-05-15 12:27:51.522303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.522415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:06.912 [2024-05-15 12:27:51.522442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.912 [2024-05-15 12:27:51.522561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:06.912 [2024-05-15 12:27:51.522583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 
sqhd:0006 p:0 m:0 dnr:1 00:07:07.170 #19 NEW cov: 12114 ft: 13285 corp: 7/524b lim: 90 exec/s: 0 rss: 70Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:07.170 [2024-05-15 12:27:51.561385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.170 [2024-05-15 12:27:51.561417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.561489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.170 [2024-05-15 12:27:51.561512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.170 #20 NEW cov: 12114 ft: 13728 corp: 8/573b lim: 90 exec/s: 0 rss: 70Mb L: 49/90 MS: 1 EraseBytes- 00:07:07.170 [2024-05-15 12:27:51.612286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.170 [2024-05-15 12:27:51.612320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.612400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.170 [2024-05-15 12:27:51.612429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.612549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.170 [2024-05-15 12:27:51.612574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.612702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.170 [2024-05-15 12:27:51.612727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.612850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.170 [2024-05-15 12:27:51.612873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.170 #21 NEW cov: 12114 ft: 13809 corp: 9/663b lim: 90 exec/s: 0 rss: 70Mb L: 90/90 MS: 1 CopyPart- 00:07:07.170 [2024-05-15 12:27:51.652134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.170 [2024-05-15 12:27:51.652167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.652234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.170 [2024-05-15 12:27:51.652258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.652370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.170 [2024-05-15 12:27:51.652394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.652518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.170 [2024-05-15 12:27:51.652538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.170 #22 NEW cov: 12114 ft: 13826 corp: 10/752b lim: 90 exec/s: 0 rss: 70Mb L: 89/90 MS: 1 ShuffleBytes- 00:07:07.170 [2024-05-15 12:27:51.702265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.170 [2024-05-15 12:27:51.702293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.702349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.170 [2024-05-15 12:27:51.702370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.702499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.170 [2024-05-15 12:27:51.702519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.702644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.170 [2024-05-15 12:27:51.702668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.170 #23 NEW cov: 12114 ft: 13965 corp: 11/838b lim: 90 exec/s: 0 rss: 70Mb L: 86/90 MS: 1 ChangeBit- 00:07:07.170 [2024-05-15 12:27:51.752421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.170 [2024-05-15 12:27:51.752454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.752524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.170 [2024-05-15 12:27:51.752546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.752658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.170 [2024-05-15 12:27:51.752681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.170 [2024-05-15 12:27:51.752795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.170 [2024-05-15 12:27:51.752814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.170 #24 NEW cov: 12114 ft: 13987 corp: 12/927b lim: 90 exec/s: 0 rss: 70Mb L: 89/90 MS: 1 ChangeBit- 00:07:07.429 [2024-05-15 12:27:51.802610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.429 [2024-05-15 12:27:51.802646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.802730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.429 [2024-05-15 12:27:51.802753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.802869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.429 [2024-05-15 12:27:51.802893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.803013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.429 [2024-05-15 12:27:51.803031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.429 #25 NEW cov: 12114 ft: 14013 corp: 13/1014b lim: 90 exec/s: 0 rss: 70Mb L: 87/90 MS: 1 InsertByte- 00:07:07.429 [2024-05-15 12:27:51.842029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.429 [2024-05-15 12:27:51.842058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.429 #26 NEW cov: 12114 ft: 14856 corp: 14/1046b lim: 90 exec/s: 0 rss: 70Mb L: 32/90 MS: 1 CrossOver- 00:07:07.429 [2024-05-15 12:27:51.882779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.429 [2024-05-15 12:27:51.882805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.882885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.429 [2024-05-15 12:27:51.882904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.883017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.429 [2024-05-15 12:27:51.883040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.883171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.429 [2024-05-15 12:27:51.883194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.429 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:07.429 #27 NEW cov: 12137 ft: 14933 corp: 15/1135b lim: 90 exec/s: 0 rss: 70Mb L: 89/90 MS: 1 ChangeBit- 00:07:07.429 [2024-05-15 12:27:51.932730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.429 [2024-05-15 12:27:51.932762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.932862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.429 [2024-05-15 12:27:51.932886] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.933005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.429 [2024-05-15 12:27:51.933029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.429 #34 NEW cov: 12137 ft: 15216 corp: 16/1195b lim: 90 exec/s: 0 rss: 71Mb L: 60/90 MS: 2 InsertByte-CrossOver- 00:07:07.429 [2024-05-15 12:27:51.973264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.429 [2024-05-15 12:27:51.973296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.973370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.429 [2024-05-15 12:27:51.973394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.973512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.429 [2024-05-15 12:27:51.973531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.973651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.429 [2024-05-15 12:27:51.973673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.429 [2024-05-15 12:27:51.973787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.429 [2024-05-15 12:27:51.973811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.429 #35 NEW cov: 12137 ft: 15227 corp: 17/1285b lim: 90 exec/s: 35 rss: 71Mb L: 90/90 MS: 1 CopyPart- 00:07:07.429 [2024-05-15 12:27:52.012727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.430 [2024-05-15 12:27:52.012758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.430 [2024-05-15 12:27:52.012889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.430 [2024-05-15 12:27:52.012914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.430 #39 NEW cov: 12137 ft: 15252 corp: 18/1337b lim: 90 exec/s: 39 rss: 71Mb L: 52/90 MS: 4 CopyPart-InsertByte-InsertByte-InsertRepeatedBytes- 00:07:07.688 [2024-05-15 12:27:52.053305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.688 [2024-05-15 12:27:52.053338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.053418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 
00:07:07.688 [2024-05-15 12:27:52.053441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.053556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.688 [2024-05-15 12:27:52.053582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.053713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.688 [2024-05-15 12:27:52.053733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.688 #40 NEW cov: 12137 ft: 15263 corp: 19/1424b lim: 90 exec/s: 40 rss: 71Mb L: 87/90 MS: 1 InsertByte- 00:07:07.688 [2024-05-15 12:27:52.093479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.688 [2024-05-15 12:27:52.093509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.093565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.688 [2024-05-15 12:27:52.093586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.093715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.688 [2024-05-15 12:27:52.093734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.093864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.688 [2024-05-15 12:27:52.093885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.688 #41 NEW cov: 12137 ft: 15269 corp: 20/1511b lim: 90 exec/s: 41 rss: 71Mb L: 87/90 MS: 1 ChangeBinInt- 00:07:07.688 [2024-05-15 12:27:52.143597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.688 [2024-05-15 12:27:52.143628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.143698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.688 [2024-05-15 12:27:52.143721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.143845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.688 [2024-05-15 12:27:52.143866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.688 [2024-05-15 12:27:52.143986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.688 [2024-05-15 12:27:52.144010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.689 #42 NEW cov: 12137 ft: 15299 corp: 21/1600b lim: 90 exec/s: 42 rss: 71Mb L: 89/90 MS: 1 ShuffleBytes- 00:07:07.689 [2024-05-15 12:27:52.183691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.689 [2024-05-15 12:27:52.183723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 [2024-05-15 12:27:52.183796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.689 [2024-05-15 12:27:52.183821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.689 [2024-05-15 12:27:52.183943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.689 [2024-05-15 12:27:52.183962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.689 [2024-05-15 12:27:52.184076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.689 [2024-05-15 12:27:52.184098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.689 #43 NEW cov: 12137 ft: 15316 corp: 22/1688b lim: 90 exec/s: 43 rss: 71Mb L: 88/90 MS: 1 CrossOver- 00:07:07.689 [2024-05-15 12:27:52.233860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.689 [2024-05-15 12:27:52.233891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 [2024-05-15 12:27:52.233967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.689 [2024-05-15 12:27:52.233990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.689 [2024-05-15 12:27:52.234118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.689 [2024-05-15 12:27:52.234141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.689 [2024-05-15 12:27:52.234265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.689 [2024-05-15 12:27:52.234285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.689 #44 NEW cov: 12137 ft: 15319 corp: 23/1775b lim: 90 exec/s: 44 rss: 71Mb L: 87/90 MS: 1 CrossOver- 00:07:07.689 [2024-05-15 12:27:52.283846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.689 [2024-05-15 12:27:52.283878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 [2024-05-15 12:27:52.283919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.689 [2024-05-15 12:27:52.283939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.689 [2024-05-15 12:27:52.284052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.689 [2024-05-15 12:27:52.284074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.948 #45 NEW cov: 12137 ft: 15324 corp: 24/1835b lim: 90 exec/s: 45 rss: 71Mb L: 60/90 MS: 1 ChangeByte- 00:07:07.948 [2024-05-15 12:27:52.333869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.948 [2024-05-15 12:27:52.333900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.333954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.948 [2024-05-15 12:27:52.333977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.334097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.948 [2024-05-15 12:27:52.334119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.948 #46 NEW cov: 12137 ft: 15346 corp: 25/1895b lim: 90 exec/s: 46 rss: 71Mb L: 60/90 MS: 1 CrossOver- 00:07:07.948 [2024-05-15 12:27:52.384611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.948 [2024-05-15 12:27:52.384643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.384717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.948 [2024-05-15 12:27:52.384739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.384849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.948 [2024-05-15 12:27:52.384872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.384990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.948 [2024-05-15 12:27:52.385011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.385129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.948 [2024-05-15 12:27:52.385150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.948 #47 NEW cov: 12137 ft: 15361 corp: 26/1985b lim: 90 exec/s: 47 rss: 71Mb L: 90/90 MS: 1 ShuffleBytes- 00:07:07.948 [2024-05-15 12:27:52.423922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.948 [2024-05-15 12:27:52.423957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.424072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.948 [2024-05-15 12:27:52.424102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.948 #48 NEW cov: 12137 ft: 15390 corp: 27/2037b lim: 90 exec/s: 48 rss: 71Mb L: 52/90 MS: 1 CopyPart- 00:07:07.948 [2024-05-15 12:27:52.474615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.948 [2024-05-15 12:27:52.474645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.474712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.948 [2024-05-15 12:27:52.474735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.474850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.948 [2024-05-15 12:27:52.474872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.474989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.948 [2024-05-15 12:27:52.475013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.948 #49 NEW cov: 12137 ft: 15402 corp: 28/2126b lim: 90 exec/s: 49 rss: 72Mb L: 89/90 MS: 1 CrossOver- 00:07:07.948 [2024-05-15 12:27:52.514999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.948 [2024-05-15 12:27:52.515029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.515099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.948 [2024-05-15 12:27:52.515122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.515243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:07.948 [2024-05-15 12:27:52.515262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.515385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:07.948 [2024-05-15 12:27:52.515403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.948 [2024-05-15 12:27:52.515527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:07.948 [2024-05-15 12:27:52.515551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.948 #50 NEW cov: 12137 ft: 15426 corp: 29/2216b lim: 
90 exec/s: 50 rss: 72Mb L: 90/90 MS: 1 ShuffleBytes- 00:07:07.949 [2024-05-15 12:27:52.554325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:07.949 [2024-05-15 12:27:52.554356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.949 [2024-05-15 12:27:52.554452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:07.949 [2024-05-15 12:27:52.554474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 #51 NEW cov: 12137 ft: 15446 corp: 30/2255b lim: 90 exec/s: 51 rss: 72Mb L: 39/90 MS: 1 CrossOver- 00:07:08.208 [2024-05-15 12:27:52.594884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.208 [2024-05-15 12:27:52.594912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.594984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.208 [2024-05-15 12:27:52.595003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.595119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.208 [2024-05-15 12:27:52.595142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.595263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.208 [2024-05-15 12:27:52.595279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.208 #57 NEW cov: 12137 ft: 15485 corp: 31/2344b lim: 90 exec/s: 57 rss: 72Mb L: 89/90 MS: 1 CrossOver- 00:07:08.208 [2024-05-15 12:27:52.635007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.208 [2024-05-15 12:27:52.635035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.635093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.208 [2024-05-15 12:27:52.635116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.635227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.208 [2024-05-15 12:27:52.635250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.635367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.208 [2024-05-15 12:27:52.635387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.208 #58 NEW cov: 12137 ft: 15487 corp: 32/2433b lim: 90 
exec/s: 58 rss: 72Mb L: 89/90 MS: 1 CopyPart- 00:07:08.208 [2024-05-15 12:27:52.675130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.208 [2024-05-15 12:27:52.675160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.675221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.208 [2024-05-15 12:27:52.675241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.675358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.208 [2024-05-15 12:27:52.675384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.675500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.208 [2024-05-15 12:27:52.675520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.208 #59 NEW cov: 12137 ft: 15493 corp: 33/2522b lim: 90 exec/s: 59 rss: 72Mb L: 89/90 MS: 1 ChangeBit- 00:07:08.208 [2024-05-15 12:27:52.725524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.208 [2024-05-15 12:27:52.725552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.725609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.208 [2024-05-15 12:27:52.725630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.725743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.208 [2024-05-15 12:27:52.725764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.725875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.208 [2024-05-15 12:27:52.725896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.726017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:08.208 [2024-05-15 12:27:52.726036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.208 #60 NEW cov: 12137 ft: 15519 corp: 34/2612b lim: 90 exec/s: 60 rss: 72Mb L: 90/90 MS: 1 CrossOver- 00:07:08.208 [2024-05-15 12:27:52.775442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.208 [2024-05-15 12:27:52.775472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.775537] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.208 [2024-05-15 12:27:52.775555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.775679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.208 [2024-05-15 12:27:52.775704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.775823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.208 [2024-05-15 12:27:52.775845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.208 #61 NEW cov: 12137 ft: 15543 corp: 35/2698b lim: 90 exec/s: 61 rss: 72Mb L: 86/90 MS: 1 ShuffleBytes- 00:07:08.208 [2024-05-15 12:27:52.815339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.208 [2024-05-15 12:27:52.815370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.815443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.208 [2024-05-15 12:27:52.815466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-05-15 12:27:52.815586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.208 [2024-05-15 12:27:52.815606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.467 #62 NEW cov: 12137 ft: 15574 corp: 36/2767b lim: 90 exec/s: 62 rss: 72Mb L: 69/90 MS: 1 CopyPart- 00:07:08.467 [2024-05-15 12:27:52.865687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.467 [2024-05-15 12:27:52.865719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.467 [2024-05-15 12:27:52.865781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.467 [2024-05-15 12:27:52.865797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.467 [2024-05-15 12:27:52.865914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.467 [2024-05-15 12:27:52.865937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.467 [2024-05-15 12:27:52.866053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.467 [2024-05-15 12:27:52.866074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.467 #63 NEW cov: 12137 ft: 15578 corp: 37/2856b lim: 90 exec/s: 63 rss: 72Mb L: 89/90 MS: 1 CrossOver- 00:07:08.467 [2024-05-15 12:27:52.915782] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.467 [2024-05-15 12:27:52.915812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.467 [2024-05-15 12:27:52.915881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.467 [2024-05-15 12:27:52.915905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.467 [2024-05-15 12:27:52.916031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.467 [2024-05-15 12:27:52.916056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.467 [2024-05-15 12:27:52.916182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.467 [2024-05-15 12:27:52.916203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.467 #64 NEW cov: 12137 ft: 15582 corp: 38/2943b lim: 90 exec/s: 64 rss: 72Mb L: 87/90 MS: 1 ChangeBit- 00:07:08.467 [2024-05-15 12:27:52.956149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.467 [2024-05-15 12:27:52.956176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.467 [2024-05-15 12:27:52.956234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.467 [2024-05-15 12:27:52.956254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.468 [2024-05-15 12:27:52.956383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.468 [2024-05-15 12:27:52.956405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.468 [2024-05-15 12:27:52.956528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.468 [2024-05-15 12:27:52.956552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.468 [2024-05-15 12:27:52.956677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:08.468 [2024-05-15 12:27:52.956698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.468 #65 NEW cov: 12137 ft: 15587 corp: 39/3033b lim: 90 exec/s: 65 rss: 72Mb L: 90/90 MS: 1 ChangeByte- 00:07:08.468 [2024-05-15 12:27:53.006271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:08.468 [2024-05-15 12:27:53.006303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.468 [2024-05-15 12:27:53.006376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:08.468 [2024-05-15 
12:27:53.006403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.468 [2024-05-15 12:27:53.006534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:08.468 [2024-05-15 12:27:53.006553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.468 [2024-05-15 12:27:53.006670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:08.468 [2024-05-15 12:27:53.006691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.468 [2024-05-15 12:27:53.006809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:08.468 [2024-05-15 12:27:53.006829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.468 #66 NEW cov: 12137 ft: 15591 corp: 40/3123b lim: 90 exec/s: 33 rss: 72Mb L: 90/90 MS: 1 ChangeBit- 00:07:08.468 #66 DONE cov: 12137 ft: 15591 corp: 40/3123b lim: 90 exec/s: 33 rss: 72Mb 00:07:08.468 Done 66 runs in 2 second(s) 00:07:08.468 [2024-05-15 12:27:53.026331] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:08.725 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:08.726 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:08.726 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:08.726 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.726 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.726 12:27:53 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.726 12:27:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:08.726 [2024-05-15 12:27:53.195201] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:07:08.726 [2024-05-15 12:27:53.195270] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409473 ] 00:07:08.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.997 [2024-05-15 12:27:53.374342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.998 [2024-05-15 12:27:53.445126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.998 [2024-05-15 12:27:53.505024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.998 [2024-05-15 12:27:53.520965] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:08.998 [2024-05-15 12:27:53.521318] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:08.998 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.998 INFO: Seed: 802729687 00:07:08.998 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:07:08.998 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:07:08.998 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:08.998 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.998 #2 INITED exec/s: 0 rss: 63Mb 00:07:08.998 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:08.998 This may also happen if the target rejected all inputs we tried so far 00:07:08.998 [2024-05-15 12:27:53.570151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:08.998 [2024-05-15 12:27:53.570182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.998 [2024-05-15 12:27:53.570218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:08.998 [2024-05-15 12:27:53.570233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.998 [2024-05-15 12:27:53.570288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:08.998 [2024-05-15 12:27:53.570305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.998 [2024-05-15 12:27:53.570359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:08.998 [2024-05-15 12:27:53.570373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.268 NEW_FUNC[1/687]: 0x4a71f0 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:09.268 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:09.526 #11 NEW cov: 11868 ft: 11860 corp: 2/45b lim: 50 exec/s: 0 rss: 70Mb L: 44/44 MS: 4 ShuffleBytes-CrossOver-CrossOver-InsertRepeatedBytes- 00:07:09.526 [2024-05-15 12:27:53.912132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.526 [2024-05-15 12:27:53.912178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:53.912317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.526 [2024-05-15 12:27:53.912349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:53.912486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.526 [2024-05-15 12:27:53.912514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:53.912642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.526 [2024-05-15 12:27:53.912669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.526 #12 NEW cov: 11998 ft: 12740 corp: 3/89b lim: 50 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 CopyPart- 00:07:09.526 [2024-05-15 12:27:53.962117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.526 [2024-05-15 12:27:53.962153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:53.962208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.526 [2024-05-15 12:27:53.962230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:53.962346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.526 [2024-05-15 12:27:53.962369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:53.962501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.526 [2024-05-15 12:27:53.962525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.526 #13 NEW cov: 12004 ft: 12975 corp: 4/133b lim: 50 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 ShuffleBytes- 00:07:09.526 [2024-05-15 12:27:54.012190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.526 [2024-05-15 12:27:54.012219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:54.012285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.526 [2024-05-15 12:27:54.012305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:54.012421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.526 [2024-05-15 12:27:54.012444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:54.012564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.526 [2024-05-15 12:27:54.012585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.526 #24 NEW cov: 12089 ft: 13220 corp: 5/177b lim: 50 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 ChangeBinInt- 00:07:09.526 [2024-05-15 12:27:54.051733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.526 [2024-05-15 12:27:54.051759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.526 #27 NEW cov: 12089 ft: 14124 corp: 6/187b lim: 50 exec/s: 0 rss: 70Mb L: 10/44 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:09.526 [2024-05-15 12:27:54.092418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.526 [2024-05-15 12:27:54.092447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:54.092514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.526 [2024-05-15 12:27:54.092537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.526 [2024-05-15 12:27:54.092654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.527 [2024-05-15 12:27:54.092675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.527 [2024-05-15 12:27:54.092800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.527 [2024-05-15 12:27:54.092828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.527 #28 NEW cov: 12089 ft: 14217 corp: 7/231b lim: 50 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 ChangeByte- 00:07:09.527 [2024-05-15 12:27:54.132594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.527 [2024-05-15 12:27:54.132622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.527 [2024-05-15 12:27:54.132688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.527 [2024-05-15 12:27:54.132706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.527 [2024-05-15 12:27:54.132821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.527 [2024-05-15 12:27:54.132839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.527 [2024-05-15 12:27:54.132964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.527 [2024-05-15 12:27:54.132984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.784 #33 NEW cov: 12089 ft: 14313 corp: 8/279b lim: 50 exec/s: 0 rss: 70Mb L: 48/48 MS: 5 ShuffleBytes-CopyPart-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:09.784 [2024-05-15 12:27:54.172724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.784 [2024-05-15 12:27:54.172751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.784 [2024-05-15 12:27:54.172817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.784 [2024-05-15 12:27:54.172835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.784 [2024-05-15 12:27:54.172953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.784 [2024-05-15 12:27:54.172971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.784 [2024-05-15 12:27:54.173087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.784 [2024-05-15 12:27:54.173110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.784 #34 NEW cov: 12089 ft: 
14372 corp: 9/323b lim: 50 exec/s: 0 rss: 70Mb L: 44/48 MS: 1 ChangeByte- 00:07:09.784 [2024-05-15 12:27:54.222838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.784 [2024-05-15 12:27:54.222867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.784 [2024-05-15 12:27:54.222926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.784 [2024-05-15 12:27:54.222948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.784 [2024-05-15 12:27:54.223071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.784 [2024-05-15 12:27:54.223093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.784 [2024-05-15 12:27:54.223206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.784 [2024-05-15 12:27:54.223230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.784 #35 NEW cov: 12089 ft: 14487 corp: 10/367b lim: 50 exec/s: 0 rss: 70Mb L: 44/48 MS: 1 ShuffleBytes- 00:07:09.784 [2024-05-15 12:27:54.273089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.784 [2024-05-15 12:27:54.273120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.784 [2024-05-15 12:27:54.273208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.784 [2024-05-15 12:27:54.273232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.784 [2024-05-15 12:27:54.273345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.785 [2024-05-15 12:27:54.273368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.785 [2024-05-15 12:27:54.273493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.785 [2024-05-15 12:27:54.273516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.785 #36 NEW cov: 12089 ft: 14519 corp: 11/411b lim: 50 exec/s: 0 rss: 70Mb L: 44/48 MS: 1 ChangeBinInt- 00:07:09.785 [2024-05-15 12:27:54.313096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.785 [2024-05-15 12:27:54.313129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.785 [2024-05-15 12:27:54.313200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:09.785 [2024-05-15 12:27:54.313224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.785 [2024-05-15 12:27:54.313344] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:09.785 [2024-05-15 12:27:54.313366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.785 [2024-05-15 12:27:54.313489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:09.785 [2024-05-15 12:27:54.313511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.785 #37 NEW cov: 12089 ft: 14560 corp: 12/458b lim: 50 exec/s: 0 rss: 70Mb L: 47/48 MS: 1 InsertRepeatedBytes- 00:07:09.785 [2024-05-15 12:27:54.362605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:09.785 [2024-05-15 12:27:54.362630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.785 #38 NEW cov: 12089 ft: 14575 corp: 13/468b lim: 50 exec/s: 0 rss: 70Mb L: 10/48 MS: 1 ChangeBinInt- 00:07:10.043 [2024-05-15 12:27:54.413243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.043 [2024-05-15 12:27:54.413275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.413366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.043 [2024-05-15 12:27:54.413388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.413508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.043 [2024-05-15 12:27:54.413532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.043 #47 NEW cov: 12089 ft: 14960 corp: 14/506b lim: 50 exec/s: 0 rss: 70Mb L: 38/48 MS: 4 CopyPart-CrossOver-InsertByte-InsertRepeatedBytes- 00:07:10.043 [2024-05-15 12:27:54.453536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.043 [2024-05-15 12:27:54.453569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.453629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.043 [2024-05-15 12:27:54.453647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.453766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.043 [2024-05-15 12:27:54.453790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.453916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.043 [2024-05-15 12:27:54.453939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:07:10.043 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:10.043 #48 NEW cov: 12112 ft: 14986 corp: 15/552b lim: 50 exec/s: 0 rss: 70Mb L: 46/48 MS: 1 CopyPart- 00:07:10.043 [2024-05-15 12:27:54.493636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.043 [2024-05-15 12:27:54.493664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.493719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.043 [2024-05-15 12:27:54.493741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.493862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.043 [2024-05-15 12:27:54.493882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.494008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.043 [2024-05-15 12:27:54.494026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.043 #49 NEW cov: 12112 ft: 15002 corp: 16/600b lim: 50 exec/s: 0 rss: 70Mb L: 48/48 MS: 1 CMP- DE: "v\303\017\224]\177\000\000"- 00:07:10.043 [2024-05-15 12:27:54.543201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.043 [2024-05-15 12:27:54.543226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.043 #50 NEW cov: 12112 ft: 15016 corp: 17/618b lim: 50 exec/s: 50 rss: 70Mb L: 18/48 MS: 1 PersAutoDict- DE: "v\303\017\224]\177\000\000"- 00:07:10.043 [2024-05-15 12:27:54.594005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.043 [2024-05-15 12:27:54.594035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.594099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.043 [2024-05-15 12:27:54.594120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.594243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.043 [2024-05-15 12:27:54.594265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.043 [2024-05-15 12:27:54.594383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.043 [2024-05-15 12:27:54.594408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.043 #51 NEW cov: 12112 ft: 15023 corp: 18/662b lim: 50 exec/s: 51 rss: 70Mb L: 44/48 MS: 1 PersAutoDict- DE: 
"v\303\017\224]\177\000\000"- 00:07:10.043 [2024-05-15 12:27:54.633451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.043 [2024-05-15 12:27:54.633477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.043 #52 NEW cov: 12112 ft: 15122 corp: 19/678b lim: 50 exec/s: 52 rss: 70Mb L: 16/48 MS: 1 CrossOver- 00:07:10.302 [2024-05-15 12:27:54.674232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.302 [2024-05-15 12:27:54.674262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.674337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.302 [2024-05-15 12:27:54.674360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.674479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.302 [2024-05-15 12:27:54.674500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.674618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.302 [2024-05-15 12:27:54.674637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.302 #53 NEW cov: 12112 ft: 15147 corp: 20/722b lim: 50 exec/s: 53 rss: 70Mb L: 44/48 MS: 1 ChangeBit- 00:07:10.302 [2024-05-15 12:27:54.714287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.302 [2024-05-15 12:27:54.714317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.714406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.302 [2024-05-15 12:27:54.714427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.714548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.302 [2024-05-15 12:27:54.714572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.714700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.302 [2024-05-15 12:27:54.714721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.302 #54 NEW cov: 12112 ft: 15156 corp: 21/766b lim: 50 exec/s: 54 rss: 70Mb L: 44/48 MS: 1 ChangeBinInt- 00:07:10.302 [2024-05-15 12:27:54.764209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.302 [2024-05-15 12:27:54.764242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.764327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.302 [2024-05-15 12:27:54.764352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.764488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.302 [2024-05-15 12:27:54.764510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.302 #55 NEW cov: 12112 ft: 15178 corp: 22/797b lim: 50 exec/s: 55 rss: 70Mb L: 31/48 MS: 1 EraseBytes- 00:07:10.302 [2024-05-15 12:27:54.804558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.302 [2024-05-15 12:27:54.804585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.804648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.302 [2024-05-15 12:27:54.804670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.804794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.302 [2024-05-15 12:27:54.804817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.804938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.302 [2024-05-15 12:27:54.804962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.302 #56 NEW cov: 12112 ft: 15195 corp: 23/841b lim: 50 exec/s: 56 rss: 70Mb L: 44/48 MS: 1 CrossOver- 00:07:10.302 [2024-05-15 12:27:54.854024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.302 [2024-05-15 12:27:54.854048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.302 #57 NEW cov: 12112 ft: 15231 corp: 24/860b lim: 50 exec/s: 57 rss: 71Mb L: 19/48 MS: 1 InsertByte- 00:07:10.302 [2024-05-15 12:27:54.904845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.302 [2024-05-15 12:27:54.904874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.904942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.302 [2024-05-15 12:27:54.904965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.905082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.302 [2024-05-15 12:27:54.905102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:07:10.302 [2024-05-15 12:27:54.905231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.302 [2024-05-15 12:27:54.905252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.561 #58 NEW cov: 12112 ft: 15234 corp: 25/904b lim: 50 exec/s: 58 rss: 71Mb L: 44/48 MS: 1 ChangeBit- 00:07:10.561 [2024-05-15 12:27:54.944929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.561 [2024-05-15 12:27:54.944956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:54.945025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.561 [2024-05-15 12:27:54.945043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:54.945167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.561 [2024-05-15 12:27:54.945190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:54.945319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.561 [2024-05-15 12:27:54.945344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.561 #59 NEW cov: 12112 ft: 15265 corp: 26/948b lim: 50 exec/s: 59 rss: 71Mb L: 44/48 MS: 1 ShuffleBytes- 00:07:10.561 [2024-05-15 12:27:54.995098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.561 [2024-05-15 12:27:54.995126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:54.995194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.561 [2024-05-15 12:27:54.995211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:54.995340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.561 [2024-05-15 12:27:54.995359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:54.995495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.561 [2024-05-15 12:27:54.995517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.561 #60 NEW cov: 12112 ft: 15289 corp: 27/992b lim: 50 exec/s: 60 rss: 71Mb L: 44/48 MS: 1 CopyPart- 00:07:10.561 [2024-05-15 12:27:55.045221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.561 [2024-05-15 12:27:55.045252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:10.561 [2024-05-15 12:27:55.045329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.561 [2024-05-15 12:27:55.045351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:55.045474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.561 [2024-05-15 12:27:55.045497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:55.045619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.561 [2024-05-15 12:27:55.045644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.561 #61 NEW cov: 12112 ft: 15299 corp: 28/1039b lim: 50 exec/s: 61 rss: 71Mb L: 47/48 MS: 1 ShuffleBytes- 00:07:10.561 [2024-05-15 12:27:55.095428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.561 [2024-05-15 12:27:55.095457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:55.095525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.561 [2024-05-15 12:27:55.095548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:55.095666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.561 [2024-05-15 12:27:55.095691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:55.095821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.561 [2024-05-15 12:27:55.095848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.561 #62 NEW cov: 12112 ft: 15334 corp: 29/1083b lim: 50 exec/s: 62 rss: 71Mb L: 44/48 MS: 1 ChangeBit- 00:07:10.561 [2024-05-15 12:27:55.145129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.561 [2024-05-15 12:27:55.145161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.561 [2024-05-15 12:27:55.145286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.561 [2024-05-15 12:27:55.145303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.561 #63 NEW cov: 12112 ft: 15585 corp: 30/1104b lim: 50 exec/s: 63 rss: 71Mb L: 21/48 MS: 1 CopyPart- 00:07:10.820 [2024-05-15 12:27:55.185775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.820 [2024-05-15 12:27:55.185809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:10.820 [2024-05-15 12:27:55.185893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.820 [2024-05-15 12:27:55.185912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.186041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.820 [2024-05-15 12:27:55.186067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.186194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.820 [2024-05-15 12:27:55.186219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.820 #64 NEW cov: 12112 ft: 15635 corp: 31/1148b lim: 50 exec/s: 64 rss: 71Mb L: 44/48 MS: 1 ChangeByte- 00:07:10.820 [2024-05-15 12:27:55.225156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.820 [2024-05-15 12:27:55.225189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.820 #65 NEW cov: 12112 ft: 15642 corp: 32/1167b lim: 50 exec/s: 65 rss: 71Mb L: 19/48 MS: 1 InsertByte- 00:07:10.820 [2024-05-15 12:27:55.265696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.820 [2024-05-15 12:27:55.265728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.265832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.820 [2024-05-15 12:27:55.265862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.265977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.820 [2024-05-15 12:27:55.266001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.820 #66 NEW cov: 12112 ft: 15647 corp: 33/1197b lim: 50 exec/s: 66 rss: 71Mb L: 30/48 MS: 1 EraseBytes- 00:07:10.820 [2024-05-15 12:27:55.315949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.820 [2024-05-15 12:27:55.315979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.316079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.820 [2024-05-15 12:27:55.316105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.316218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.820 [2024-05-15 12:27:55.316248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:10.820 [2024-05-15 12:27:55.356192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.820 [2024-05-15 12:27:55.356223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.356297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.820 [2024-05-15 12:27:55.356317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.356437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.820 [2024-05-15 12:27:55.356459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.356581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.820 [2024-05-15 12:27:55.356599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.820 #68 NEW cov: 12112 ft: 15654 corp: 34/1243b lim: 50 exec/s: 68 rss: 71Mb L: 46/48 MS: 2 CrossOver-CopyPart- 00:07:10.820 [2024-05-15 12:27:55.396351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:10.820 [2024-05-15 12:27:55.396386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.396457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:10.820 [2024-05-15 12:27:55.396480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.396601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:10.820 [2024-05-15 12:27:55.396623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.820 [2024-05-15 12:27:55.396743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:10.820 [2024-05-15 12:27:55.396761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.820 #69 NEW cov: 12112 ft: 15661 corp: 35/1291b lim: 50 exec/s: 69 rss: 72Mb L: 48/48 MS: 1 ShuffleBytes- 00:07:11.080 [2024-05-15 12:27:55.446527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:11.080 [2024-05-15 12:27:55.446557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.446613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:11.080 [2024-05-15 12:27:55.446633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.446746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:11.080 [2024-05-15 12:27:55.446767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.446890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:11.080 [2024-05-15 12:27:55.446913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.080 #70 NEW cov: 12112 ft: 15668 corp: 36/1335b lim: 50 exec/s: 70 rss: 72Mb L: 44/48 MS: 1 ShuffleBytes- 00:07:11.080 [2024-05-15 12:27:55.496687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:11.080 [2024-05-15 12:27:55.496717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.496774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:11.080 [2024-05-15 12:27:55.496796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.496916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:11.080 [2024-05-15 12:27:55.496941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.497062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:11.080 [2024-05-15 12:27:55.497084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.080 #71 NEW cov: 12112 ft: 15693 corp: 37/1381b lim: 50 exec/s: 71 rss: 72Mb L: 46/48 MS: 1 ShuffleBytes- 00:07:11.080 [2024-05-15 12:27:55.547018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:11.080 [2024-05-15 12:27:55.547048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.547119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:11.080 [2024-05-15 12:27:55.547141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.547277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:11.080 [2024-05-15 12:27:55.547300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.547419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:11.080 [2024-05-15 12:27:55.547442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.080 [2024-05-15 12:27:55.547571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:11.080 [2024-05-15 12:27:55.547593] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:11.080 #72 NEW cov: 12112 ft: 15748 corp: 38/1431b lim: 50 exec/s: 36 rss: 72Mb L: 50/50 MS: 1 CrossOver- 00:07:11.080 #72 DONE cov: 12112 ft: 15748 corp: 38/1431b lim: 50 exec/s: 36 rss: 72Mb 00:07:11.080 ###### Recommended dictionary. ###### 00:07:11.080 "v\303\017\224]\177\000\000" # Uses: 2 00:07:11.080 ###### End of recommended dictionary. ###### 00:07:11.080 Done 72 runs in 2 second(s) 00:07:11.080 [2024-05-15 12:27:55.568801] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:11.080 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:11.339 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:11.339 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.339 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.339 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.339 12:27:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:11.339 [2024-05-15 12:27:55.733686] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:07:11.339 [2024-05-15 12:27:55.733779] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410013 ] 00:07:11.339 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.339 [2024-05-15 12:27:55.910840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.598 [2024-05-15 12:27:55.975965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.598 [2024-05-15 12:27:56.035246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.598 [2024-05-15 12:27:56.051196] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:11.598 [2024-05-15 12:27:56.051579] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:11.598 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.598 INFO: Seed: 3334730324 00:07:11.598 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:07:11.598 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:07:11.598 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:11.598 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.598 #2 INITED exec/s: 0 rss: 63Mb 00:07:11.598 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.598 This may also happen if the target rejected all inputs we tried so far 00:07:11.598 [2024-05-15 12:27:56.106643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.598 [2024-05-15 12:27:56.106672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.856 NEW_FUNC[1/687]: 0x4a94b0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:11.856 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.856 #5 NEW cov: 11894 ft: 11895 corp: 2/32b lim: 85 exec/s: 0 rss: 70Mb L: 31/31 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:11.856 [2024-05-15 12:27:56.417411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.856 [2024-05-15 12:27:56.417452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.856 #6 NEW cov: 12024 ft: 12593 corp: 3/63b lim: 85 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 ChangeBit- 00:07:11.856 [2024-05-15 12:27:56.467469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:11.856 [2024-05-15 12:27:56.467497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.115 #12 NEW cov: 12030 ft: 12759 corp: 4/94b lim: 85 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 ChangeBinInt- 00:07:12.115 [2024-05-15 12:27:56.517735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.115 
[2024-05-15 12:27:56.517762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.115 [2024-05-15 12:27:56.517793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.115 [2024-05-15 12:27:56.517807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.115 #13 NEW cov: 12115 ft: 13835 corp: 5/129b lim: 85 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CMP- DE: "\365\377\377\377"- 00:07:12.115 [2024-05-15 12:27:56.557913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.115 [2024-05-15 12:27:56.557943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.115 [2024-05-15 12:27:56.557979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.115 [2024-05-15 12:27:56.557994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.115 #14 NEW cov: 12115 ft: 13943 corp: 6/164b lim: 85 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:12.115 [2024-05-15 12:27:56.608116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.115 [2024-05-15 12:27:56.608142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.115 [2024-05-15 12:27:56.608178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.115 [2024-05-15 12:27:56.608193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.115 [2024-05-15 12:27:56.608244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.115 [2024-05-15 12:27:56.608258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.115 #15 NEW cov: 12115 ft: 14370 corp: 7/216b lim: 85 exec/s: 0 rss: 70Mb L: 52/52 MS: 1 CopyPart- 00:07:12.115 [2024-05-15 12:27:56.648099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.115 [2024-05-15 12:27:56.648126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.115 [2024-05-15 12:27:56.648158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.115 [2024-05-15 12:27:56.648173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.115 #16 NEW cov: 12115 ft: 14409 corp: 8/251b lim: 85 exec/s: 0 rss: 70Mb L: 35/52 MS: 1 PersAutoDict- DE: "\365\377\377\377"- 00:07:12.115 [2024-05-15 12:27:56.698167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.115 [2024-05-15 12:27:56.698194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:12.115 [2024-05-15 12:27:56.698226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.115 [2024-05-15 12:27:56.698240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.115 #17 NEW cov: 12115 ft: 14483 corp: 9/286b lim: 85 exec/s: 0 rss: 70Mb L: 35/52 MS: 1 ChangeBinInt- 00:07:12.373 [2024-05-15 12:27:56.748263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.373 [2024-05-15 12:27:56.748289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.373 #18 NEW cov: 12115 ft: 14555 corp: 10/317b lim: 85 exec/s: 0 rss: 70Mb L: 31/52 MS: 1 ChangeByte- 00:07:12.373 [2024-05-15 12:27:56.788601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.373 [2024-05-15 12:27:56.788629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.373 [2024-05-15 12:27:56.788666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.373 [2024-05-15 12:27:56.788681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.373 [2024-05-15 12:27:56.788731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.373 [2024-05-15 12:27:56.788746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.373 #19 NEW cov: 12115 ft: 14594 corp: 11/369b lim: 85 exec/s: 0 rss: 70Mb L: 52/52 MS: 1 PersAutoDict- DE: "\365\377\377\377"- 00:07:12.373 [2024-05-15 12:27:56.838638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.373 [2024-05-15 12:27:56.838666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.373 [2024-05-15 12:27:56.838714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.373 [2024-05-15 12:27:56.838729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.373 #20 NEW cov: 12115 ft: 14667 corp: 12/404b lim: 85 exec/s: 0 rss: 70Mb L: 35/52 MS: 1 CopyPart- 00:07:12.373 [2024-05-15 12:27:56.878869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.373 [2024-05-15 12:27:56.878896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.373 [2024-05-15 12:27:56.878952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.373 [2024-05-15 12:27:56.878968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.373 [2024-05-15 12:27:56.879019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.373 [2024-05-15 12:27:56.879035] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.373 #21 NEW cov: 12115 ft: 14684 corp: 13/456b lim: 85 exec/s: 0 rss: 70Mb L: 52/52 MS: 1 ChangeBinInt- 00:07:12.373 [2024-05-15 12:27:56.928858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.373 [2024-05-15 12:27:56.928885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.373 [2024-05-15 12:27:56.928928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.374 [2024-05-15 12:27:56.928944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.374 #22 NEW cov: 12115 ft: 14722 corp: 14/491b lim: 85 exec/s: 0 rss: 70Mb L: 35/52 MS: 1 CopyPart- 00:07:12.374 [2024-05-15 12:27:56.979118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.374 [2024-05-15 12:27:56.979147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.374 [2024-05-15 12:27:56.979198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.374 [2024-05-15 12:27:56.979214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.374 [2024-05-15 12:27:56.979268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.374 [2024-05-15 12:27:56.979284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.630 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:12.630 #23 NEW cov: 12138 ft: 14771 corp: 15/543b lim: 85 exec/s: 0 rss: 71Mb L: 52/52 MS: 1 ChangeByte- 00:07:12.630 [2024-05-15 12:27:57.018961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.630 [2024-05-15 12:27:57.018988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.630 #24 NEW cov: 12138 ft: 14843 corp: 16/574b lim: 85 exec/s: 0 rss: 71Mb L: 31/52 MS: 1 ChangeByte- 00:07:12.630 [2024-05-15 12:27:57.059208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.630 [2024-05-15 12:27:57.059234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.630 [2024-05-15 12:27:57.059269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.630 [2024-05-15 12:27:57.059284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.630 #25 NEW cov: 12138 ft: 14865 corp: 17/608b lim: 85 exec/s: 25 rss: 71Mb L: 34/52 MS: 1 EraseBytes- 00:07:12.630 [2024-05-15 12:27:57.109345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.631 
[2024-05-15 12:27:57.109372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.631 [2024-05-15 12:27:57.109415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.631 [2024-05-15 12:27:57.109431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.631 #26 NEW cov: 12138 ft: 14876 corp: 18/658b lim: 85 exec/s: 26 rss: 71Mb L: 50/52 MS: 1 InsertRepeatedBytes- 00:07:12.631 [2024-05-15 12:27:57.149479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.631 [2024-05-15 12:27:57.149506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.631 [2024-05-15 12:27:57.149539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.631 [2024-05-15 12:27:57.149553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.631 #27 NEW cov: 12138 ft: 14936 corp: 19/693b lim: 85 exec/s: 27 rss: 71Mb L: 35/52 MS: 1 PersAutoDict- DE: "\365\377\377\377"- 00:07:12.631 [2024-05-15 12:27:57.189594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.631 [2024-05-15 12:27:57.189621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.631 [2024-05-15 12:27:57.189654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.631 [2024-05-15 12:27:57.189669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.631 #28 NEW cov: 12138 ft: 14942 corp: 20/738b lim: 85 exec/s: 28 rss: 71Mb L: 45/52 MS: 1 InsertRepeatedBytes- 00:07:12.631 [2024-05-15 12:27:57.239855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.631 [2024-05-15 12:27:57.239881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.631 [2024-05-15 12:27:57.239927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.631 [2024-05-15 12:27:57.239941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.631 [2024-05-15 12:27:57.239991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.631 [2024-05-15 12:27:57.240022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.888 #29 NEW cov: 12138 ft: 14961 corp: 21/790b lim: 85 exec/s: 29 rss: 71Mb L: 52/52 MS: 1 ChangeBit- 00:07:12.888 [2024-05-15 12:27:57.290171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.888 [2024-05-15 12:27:57.290197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:12.888 [2024-05-15 12:27:57.290249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.888 [2024-05-15 12:27:57.290265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.888 [2024-05-15 12:27:57.290315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.888 [2024-05-15 12:27:57.290330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.888 [2024-05-15 12:27:57.290384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:12.888 [2024-05-15 12:27:57.290399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.888 #33 NEW cov: 12138 ft: 15318 corp: 22/862b lim: 85 exec/s: 33 rss: 71Mb L: 72/72 MS: 4 CrossOver-ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:07:12.888 [2024-05-15 12:27:57.339890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.888 [2024-05-15 12:27:57.339917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.888 [2024-05-15 12:27:57.339950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.888 [2024-05-15 12:27:57.339965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.888 #34 NEW cov: 12138 ft: 15353 corp: 23/897b lim: 85 exec/s: 34 rss: 71Mb L: 35/72 MS: 1 ChangeByte- 00:07:12.888 [2024-05-15 12:27:57.380352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.888 [2024-05-15 12:27:57.380384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.888 [2024-05-15 12:27:57.380455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.888 [2024-05-15 12:27:57.380468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.889 [2024-05-15 12:27:57.380523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.889 [2024-05-15 12:27:57.380538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.889 #35 NEW cov: 12138 ft: 15368 corp: 24/949b lim: 85 exec/s: 35 rss: 71Mb L: 52/72 MS: 1 ChangeBinInt- 00:07:12.889 [2024-05-15 12:27:57.430389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.889 [2024-05-15 12:27:57.430416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.889 [2024-05-15 12:27:57.430463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.889 [2024-05-15 12:27:57.430477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.889 [2024-05-15 12:27:57.430529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:12.889 [2024-05-15 12:27:57.430543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.889 #36 NEW cov: 12138 ft: 15385 corp: 25/1001b lim: 85 exec/s: 36 rss: 71Mb L: 52/72 MS: 1 ShuffleBytes- 00:07:12.889 [2024-05-15 12:27:57.470338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:12.889 [2024-05-15 12:27:57.470363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.889 [2024-05-15 12:27:57.470404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:12.889 [2024-05-15 12:27:57.470419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.889 #37 NEW cov: 12138 ft: 15393 corp: 26/1036b lim: 85 exec/s: 37 rss: 71Mb L: 35/72 MS: 1 ChangeBit- 00:07:13.147 [2024-05-15 12:27:57.510485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.147 [2024-05-15 12:27:57.510511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.510545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.147 [2024-05-15 12:27:57.510560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.147 #38 NEW cov: 12138 ft: 15417 corp: 27/1077b lim: 85 exec/s: 38 rss: 71Mb L: 41/72 MS: 1 CopyPart- 00:07:13.147 [2024-05-15 12:27:57.550876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.147 [2024-05-15 12:27:57.550902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.550949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.147 [2024-05-15 12:27:57.550964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.551016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.147 [2024-05-15 12:27:57.551031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.551082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:13.147 [2024-05-15 12:27:57.551098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.147 #39 NEW cov: 12138 ft: 15427 corp: 28/1160b lim: 85 exec/s: 39 rss: 71Mb L: 83/83 MS: 1 CrossOver- 00:07:13.147 [2024-05-15 12:27:57.600672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 
00:07:13.147 [2024-05-15 12:27:57.600697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.600733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.147 [2024-05-15 12:27:57.600753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.147 #40 NEW cov: 12138 ft: 15434 corp: 29/1195b lim: 85 exec/s: 40 rss: 71Mb L: 35/83 MS: 1 ShuffleBytes- 00:07:13.147 [2024-05-15 12:27:57.650841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.147 [2024-05-15 12:27:57.650867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.650900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.147 [2024-05-15 12:27:57.650915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.700984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.147 [2024-05-15 12:27:57.701010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.701043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.147 [2024-05-15 12:27:57.701057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.147 #42 NEW cov: 12138 ft: 15455 corp: 30/1236b lim: 85 exec/s: 42 rss: 71Mb L: 41/83 MS: 2 ShuffleBytes-ChangeBinInt- 00:07:13.147 [2024-05-15 12:27:57.741104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.147 [2024-05-15 12:27:57.741131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.147 [2024-05-15 12:27:57.741162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.147 [2024-05-15 12:27:57.741178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.405 #43 NEW cov: 12138 ft: 15467 corp: 31/1278b lim: 85 exec/s: 43 rss: 72Mb L: 42/83 MS: 1 InsertByte- 00:07:13.405 [2024-05-15 12:27:57.791243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.405 [2024-05-15 12:27:57.791269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.405 [2024-05-15 12:27:57.791327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.405 [2024-05-15 12:27:57.791342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.405 #44 NEW cov: 12138 ft: 15471 corp: 32/1313b lim: 85 exec/s: 44 rss: 72Mb L: 35/83 MS: 1 PersAutoDict- DE: 
"\365\377\377\377"- 00:07:13.405 [2024-05-15 12:27:57.831357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.405 [2024-05-15 12:27:57.831389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.405 [2024-05-15 12:27:57.831424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.405 [2024-05-15 12:27:57.831456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.405 #45 NEW cov: 12138 ft: 15493 corp: 33/1363b lim: 85 exec/s: 45 rss: 72Mb L: 50/83 MS: 1 ChangeBinInt- 00:07:13.405 [2024-05-15 12:27:57.871461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.405 [2024-05-15 12:27:57.871487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.405 [2024-05-15 12:27:57.871523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.405 [2024-05-15 12:27:57.871541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.405 #46 NEW cov: 12138 ft: 15540 corp: 34/1398b lim: 85 exec/s: 46 rss: 72Mb L: 35/83 MS: 1 ChangeBit- 00:07:13.405 [2024-05-15 12:27:57.921499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.405 [2024-05-15 12:27:57.921527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.405 #47 NEW cov: 12138 ft: 15551 corp: 35/1429b lim: 85 exec/s: 47 rss: 72Mb L: 31/83 MS: 1 CopyPart- 00:07:13.405 [2024-05-15 12:27:57.972037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.405 [2024-05-15 12:27:57.972064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.405 [2024-05-15 12:27:57.972103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.405 [2024-05-15 12:27:57.972118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.405 [2024-05-15 12:27:57.972172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.405 [2024-05-15 12:27:57.972187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.405 [2024-05-15 12:27:57.972243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:13.405 [2024-05-15 12:27:57.972258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.405 #48 NEW cov: 12138 ft: 15554 corp: 36/1505b lim: 85 exec/s: 48 rss: 72Mb L: 76/83 MS: 1 InsertRepeatedBytes- 00:07:13.405 [2024-05-15 12:27:58.011976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.405 [2024-05-15 
12:27:58.012002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.405 [2024-05-15 12:27:58.012041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.405 [2024-05-15 12:27:58.012056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.405 [2024-05-15 12:27:58.012110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.405 [2024-05-15 12:27:58.012126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.665 #49 NEW cov: 12138 ft: 15565 corp: 37/1565b lim: 85 exec/s: 49 rss: 72Mb L: 60/83 MS: 1 CMP- DE: "\346\024\330\325h\007\206\000"- 00:07:13.665 [2024-05-15 12:27:58.052143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.665 [2024-05-15 12:27:58.052169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.665 [2024-05-15 12:27:58.052202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.665 [2024-05-15 12:27:58.052216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.665 [2024-05-15 12:27:58.052268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:13.665 [2024-05-15 12:27:58.052284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.665 #50 NEW cov: 12138 ft: 15577 corp: 38/1617b lim: 85 exec/s: 50 rss: 72Mb L: 52/83 MS: 1 ChangeBit- 00:07:13.665 [2024-05-15 12:27:58.102089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:13.665 [2024-05-15 12:27:58.102118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.665 [2024-05-15 12:27:58.102152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:13.665 [2024-05-15 12:27:58.102167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.665 #51 NEW cov: 12138 ft: 15588 corp: 39/1652b lim: 85 exec/s: 25 rss: 72Mb L: 35/83 MS: 1 ChangeBit- 00:07:13.665 #51 DONE cov: 12138 ft: 15588 corp: 39/1652b lim: 85 exec/s: 25 rss: 72Mb 00:07:13.665 ###### Recommended dictionary. ###### 00:07:13.665 "\365\377\377\377" # Uses: 4 00:07:13.665 "\346\024\330\325h\007\206\000" # Uses: 0 00:07:13.665 ###### End of recommended dictionary. 
###### 00:07:13.665 Done 51 runs in 2 second(s) 00:07:13.665 [2024-05-15 12:27:58.133740] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.665 12:27:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:13.923 [2024-05-15 12:27:58.301971] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:07:13.923 [2024-05-15 12:27:58.302041] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410339 ] 00:07:13.923 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.923 [2024-05-15 12:27:58.484264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.181 [2024-05-15 12:27:58.550117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.181 [2024-05-15 12:27:58.609743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.181 [2024-05-15 12:27:58.625693] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:14.181 [2024-05-15 12:27:58.626141] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:14.181 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.181 INFO: Seed: 1614763836 00:07:14.181 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:07:14.181 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:07:14.181 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:14.181 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.181 #2 INITED exec/s: 0 rss: 63Mb 00:07:14.181 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.181 This may also happen if the target rejected all inputs we tried so far 00:07:14.181 [2024-05-15 12:27:58.691813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.181 [2024-05-15 12:27:58.691856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.439 NEW_FUNC[1/685]: 0x4ac6e0 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:14.439 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.439 #4 NEW cov: 11820 ft: 11828 corp: 2/9b lim: 25 exec/s: 0 rss: 70Mb L: 8/8 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:14.439 [2024-05-15 12:27:59.022742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.439 [2024-05-15 12:27:59.022782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.439 NEW_FUNC[1/1]: 0x12f85a0 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 00:07:14.439 #5 NEW cov: 11957 ft: 12514 corp: 3/17b lim: 25 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:14.697 [2024-05-15 12:27:59.083096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.698 [2024-05-15 12:27:59.083130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.698 [2024-05-15 12:27:59.083243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 
00:07:14.698 [2024-05-15 12:27:59.083264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.698 #6 NEW cov: 11963 ft: 13157 corp: 4/30b lim: 25 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CrossOver- 00:07:14.698 [2024-05-15 12:27:59.143178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.698 [2024-05-15 12:27:59.143211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.698 [2024-05-15 12:27:59.143282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.698 [2024-05-15 12:27:59.143305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.698 #7 NEW cov: 12048 ft: 13434 corp: 5/43b lim: 25 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:07:14.698 [2024-05-15 12:27:59.203227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.698 [2024-05-15 12:27:59.203258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.698 #8 NEW cov: 12048 ft: 13547 corp: 6/51b lim: 25 exec/s: 0 rss: 70Mb L: 8/13 MS: 1 ChangeByte- 00:07:14.698 [2024-05-15 12:27:59.253356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.698 [2024-05-15 12:27:59.253386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.698 #14 NEW cov: 12048 ft: 13655 corp: 7/59b lim: 25 exec/s: 0 rss: 70Mb L: 8/13 MS: 1 ChangeBinInt- 00:07:14.698 [2024-05-15 12:27:59.313853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.698 [2024-05-15 12:27:59.313886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.698 [2024-05-15 12:27:59.313984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.698 [2024-05-15 12:27:59.314010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.956 #15 NEW cov: 12048 ft: 13703 corp: 8/72b lim: 25 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:07:14.956 [2024-05-15 12:27:59.373985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.956 [2024-05-15 12:27:59.374015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.956 [2024-05-15 12:27:59.374090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.956 [2024-05-15 12:27:59.374110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.956 #16 NEW cov: 12048 ft: 13751 corp: 9/85b lim: 25 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeByte- 00:07:14.956 [2024-05-15 12:27:59.424542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.956 [2024-05-15 
12:27:59.424575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.956 [2024-05-15 12:27:59.424662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.956 [2024-05-15 12:27:59.424686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.956 [2024-05-15 12:27:59.424820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.956 [2024-05-15 12:27:59.424846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.956 [2024-05-15 12:27:59.424986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:14.956 [2024-05-15 12:27:59.425010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.956 #17 NEW cov: 12048 ft: 14303 corp: 10/105b lim: 25 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:14.956 [2024-05-15 12:27:59.474653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.956 [2024-05-15 12:27:59.474685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.956 [2024-05-15 12:27:59.474786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.956 [2024-05-15 12:27:59.474802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.956 [2024-05-15 12:27:59.474936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:14.956 [2024-05-15 12:27:59.474961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.956 [2024-05-15 12:27:59.475097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:14.956 [2024-05-15 12:27:59.475120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.956 #18 NEW cov: 12048 ft: 14381 corp: 11/129b lim: 25 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:14.956 [2024-05-15 12:27:59.534544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:14.956 [2024-05-15 12:27:59.534574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.956 [2024-05-15 12:27:59.534707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:14.956 [2024-05-15 12:27:59.534731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.957 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:14.957 #19 NEW cov: 12071 ft: 14401 corp: 12/141b lim: 25 exec/s: 0 rss: 71Mb L: 12/24 MS: 1 EraseBytes- 00:07:15.216 [2024-05-15 12:27:59.595168] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.216 [2024-05-15 12:27:59.595202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.595331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.216 [2024-05-15 12:27:59.595358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.595497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.216 [2024-05-15 12:27:59.595520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.595661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:15.216 [2024-05-15 12:27:59.595685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.216 #20 NEW cov: 12071 ft: 14405 corp: 13/163b lim: 25 exec/s: 0 rss: 71Mb L: 22/24 MS: 1 InsertRepeatedBytes- 00:07:15.216 [2024-05-15 12:27:59.644846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.216 [2024-05-15 12:27:59.644879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.644960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.216 [2024-05-15 12:27:59.644986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.216 #26 NEW cov: 12071 ft: 14515 corp: 14/176b lim: 25 exec/s: 0 rss: 71Mb L: 13/24 MS: 1 ChangeBit- 00:07:15.216 [2024-05-15 12:27:59.694733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.216 [2024-05-15 12:27:59.694764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.216 #27 NEW cov: 12071 ft: 14525 corp: 15/184b lim: 25 exec/s: 27 rss: 71Mb L: 8/24 MS: 1 ShuffleBytes- 00:07:15.216 [2024-05-15 12:27:59.745496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.216 [2024-05-15 12:27:59.745530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.745622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.216 [2024-05-15 12:27:59.745645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.745781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.216 [2024-05-15 12:27:59.745803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.745945] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:15.216 [2024-05-15 12:27:59.745969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.216 #28 NEW cov: 12071 ft: 14531 corp: 16/206b lim: 25 exec/s: 28 rss: 71Mb L: 22/24 MS: 1 CopyPart- 00:07:15.216 [2024-05-15 12:27:59.805514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.216 [2024-05-15 12:27:59.805550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.805652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.216 [2024-05-15 12:27:59.805677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.216 [2024-05-15 12:27:59.805807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.216 [2024-05-15 12:27:59.805834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.475 #29 NEW cov: 12071 ft: 14799 corp: 17/222b lim: 25 exec/s: 29 rss: 71Mb L: 16/24 MS: 1 CrossOver- 00:07:15.475 [2024-05-15 12:27:59.865892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.475 [2024-05-15 12:27:59.865924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.475 [2024-05-15 12:27:59.866013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.475 [2024-05-15 12:27:59.866037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.475 [2024-05-15 12:27:59.866168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.475 [2024-05-15 12:27:59.866207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.475 [2024-05-15 12:27:59.866334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:15.475 [2024-05-15 12:27:59.866356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.475 #30 NEW cov: 12071 ft: 14837 corp: 18/244b lim: 25 exec/s: 30 rss: 71Mb L: 22/24 MS: 1 ShuffleBytes- 00:07:15.475 [2024-05-15 12:27:59.925648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.476 [2024-05-15 12:27:59.925682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.476 [2024-05-15 12:27:59.925816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.476 [2024-05-15 12:27:59.925841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.476 #31 NEW cov: 12071 ft: 14895 corp: 19/257b lim: 
25 exec/s: 31 rss: 71Mb L: 13/24 MS: 1 ChangeBinInt- 00:07:15.476 [2024-05-15 12:27:59.975823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.476 [2024-05-15 12:27:59.975855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.476 [2024-05-15 12:27:59.975969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.476 [2024-05-15 12:27:59.975994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.476 #32 NEW cov: 12071 ft: 14898 corp: 20/270b lim: 25 exec/s: 32 rss: 71Mb L: 13/24 MS: 1 ChangeByte- 00:07:15.476 [2024-05-15 12:28:00.026030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.476 [2024-05-15 12:28:00.026068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.476 [2024-05-15 12:28:00.026202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.476 [2024-05-15 12:28:00.026224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.476 #33 NEW cov: 12071 ft: 14902 corp: 21/282b lim: 25 exec/s: 33 rss: 71Mb L: 12/24 MS: 1 ChangeByte- 00:07:15.476 [2024-05-15 12:28:00.086024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.476 [2024-05-15 12:28:00.086056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.735 #34 NEW cov: 12071 ft: 14943 corp: 22/290b lim: 25 exec/s: 34 rss: 71Mb L: 8/24 MS: 1 ShuffleBytes- 00:07:15.735 [2024-05-15 12:28:00.146373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.735 [2024-05-15 12:28:00.146416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.735 [2024-05-15 12:28:00.146495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.735 [2024-05-15 12:28:00.146518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.735 #38 NEW cov: 12071 ft: 14958 corp: 23/302b lim: 25 exec/s: 38 rss: 71Mb L: 12/24 MS: 4 ChangeByte-ShuffleBytes-ShuffleBytes-CrossOver- 00:07:15.735 [2024-05-15 12:28:00.196977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.735 [2024-05-15 12:28:00.197010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.735 [2024-05-15 12:28:00.197076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.735 [2024-05-15 12:28:00.197098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.735 [2024-05-15 12:28:00.197239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 
nsid:0 00:07:15.735 [2024-05-15 12:28:00.197265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.735 [2024-05-15 12:28:00.197397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:15.735 [2024-05-15 12:28:00.197420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.735 #39 NEW cov: 12071 ft: 14967 corp: 24/322b lim: 25 exec/s: 39 rss: 71Mb L: 20/24 MS: 1 ShuffleBytes- 00:07:15.735 [2024-05-15 12:28:00.247125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.735 [2024-05-15 12:28:00.247158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.735 [2024-05-15 12:28:00.247238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.735 [2024-05-15 12:28:00.247261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.735 [2024-05-15 12:28:00.247391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:15.735 [2024-05-15 12:28:00.247412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.735 [2024-05-15 12:28:00.247533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:15.735 [2024-05-15 12:28:00.247555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.735 #40 NEW cov: 12071 ft: 15047 corp: 25/344b lim: 25 exec/s: 40 rss: 71Mb L: 22/24 MS: 1 ChangeByte- 00:07:15.735 [2024-05-15 12:28:00.306733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.735 [2024-05-15 12:28:00.306760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.735 #41 NEW cov: 12071 ft: 15074 corp: 26/352b lim: 25 exec/s: 41 rss: 71Mb L: 8/24 MS: 1 ShuffleBytes- 00:07:15.993 [2024-05-15 12:28:00.367114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.993 [2024-05-15 12:28:00.367146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.993 [2024-05-15 12:28:00.367222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.993 [2024-05-15 12:28:00.367246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.993 #42 NEW cov: 12071 ft: 15085 corp: 27/365b lim: 25 exec/s: 42 rss: 71Mb L: 13/24 MS: 1 ChangeBinInt- 00:07:15.993 [2024-05-15 12:28:00.417123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.993 [2024-05-15 12:28:00.417152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.993 #43 NEW cov: 12071 ft: 
15099 corp: 28/373b lim: 25 exec/s: 43 rss: 71Mb L: 8/24 MS: 1 CopyPart- 00:07:15.993 [2024-05-15 12:28:00.477450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.993 [2024-05-15 12:28:00.477482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.993 [2024-05-15 12:28:00.477584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.993 [2024-05-15 12:28:00.477609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.993 #44 NEW cov: 12071 ft: 15124 corp: 29/386b lim: 25 exec/s: 44 rss: 72Mb L: 13/24 MS: 1 ChangeBit- 00:07:15.993 [2024-05-15 12:28:00.537620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.993 [2024-05-15 12:28:00.537652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.993 [2024-05-15 12:28:00.537738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.993 [2024-05-15 12:28:00.537762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.993 #45 NEW cov: 12071 ft: 15131 corp: 30/399b lim: 25 exec/s: 45 rss: 72Mb L: 13/24 MS: 1 CopyPart- 00:07:15.993 [2024-05-15 12:28:00.597738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:15.993 [2024-05-15 12:28:00.597770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.993 [2024-05-15 12:28:00.597857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:15.993 [2024-05-15 12:28:00.597877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.252 #46 NEW cov: 12071 ft: 15152 corp: 31/412b lim: 25 exec/s: 46 rss: 72Mb L: 13/24 MS: 1 CopyPart- 00:07:16.252 [2024-05-15 12:28:00.648296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:16.252 [2024-05-15 12:28:00.648328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.252 [2024-05-15 12:28:00.648427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:16.252 [2024-05-15 12:28:00.648449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.252 [2024-05-15 12:28:00.648579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:16.252 [2024-05-15 12:28:00.648601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.252 [2024-05-15 12:28:00.648741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:16.252 [2024-05-15 12:28:00.648763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.252 #47 NEW cov: 12071 ft: 15168 corp: 32/432b lim: 25 exec/s: 23 rss: 72Mb L: 20/24 MS: 1 ChangeByte- 00:07:16.252 #47 DONE cov: 12071 ft: 15168 corp: 32/432b lim: 25 exec/s: 23 rss: 72Mb 00:07:16.252 Done 47 runs in 2 second(s) 00:07:16.252 [2024-05-15 12:28:00.680007] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.252 12:28:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:16.252 [2024-05-15 12:28:00.848409] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:07:16.252 [2024-05-15 12:28:00.848497] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410836 ] 00:07:16.511 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.511 [2024-05-15 12:28:01.023084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.511 [2024-05-15 12:28:01.088593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.769 [2024-05-15 12:28:01.147892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.769 [2024-05-15 12:28:01.163841] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:16.769 [2024-05-15 12:28:01.164221] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:16.769 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.769 INFO: Seed: 4152763686 00:07:16.769 INFO: Loaded 1 modules (353644 inline 8-bit counters): 353644 [0x293144c, 0x29879b8), 00:07:16.769 INFO: Loaded 1 PC tables (353644 PCs): 353644 [0x29879b8,0x2eed078), 00:07:16.769 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:16.769 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.769 #2 INITED exec/s: 0 rss: 63Mb 00:07:16.769 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.769 This may also happen if the target rejected all inputs we tried so far 00:07:16.769 [2024-05-15 12:28:01.229522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.769 [2024-05-15 12:28:01.229553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.769 [2024-05-15 12:28:01.229603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.769 [2024-05-15 12:28:01.229619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.028 NEW_FUNC[1/686]: 0x4ad7c0 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:17.028 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.028 #16 NEW cov: 11895 ft: 11896 corp: 2/58b lim: 100 exec/s: 0 rss: 70Mb L: 57/57 MS: 4 InsertByte-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:17.028 [2024-05-15 12:28:01.560470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.028 [2024-05-15 12:28:01.560529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.028 [2024-05-15 12:28:01.560613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:288230376135000064 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.028 [2024-05-15 
12:28:01.560643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.028 NEW_FUNC[1/1]: 0x1759f90 in nvme_qpair_check_enabled /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:637 00:07:17.028 #17 NEW cov: 12029 ft: 12591 corp: 3/115b lim: 100 exec/s: 0 rss: 70Mb L: 57/57 MS: 1 ChangeBinInt- 00:07:17.028 [2024-05-15 12:28:01.620426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.028 [2024-05-15 12:28:01.620455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.028 [2024-05-15 12:28:01.620487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.028 [2024-05-15 12:28:01.620501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.028 #18 NEW cov: 12035 ft: 12893 corp: 4/172b lim: 100 exec/s: 0 rss: 70Mb L: 57/57 MS: 1 CMP- DE: "\037\000"- 00:07:17.287 [2024-05-15 12:28:01.660682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.660716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.287 [2024-05-15 12:28:01.660747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.660762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.287 [2024-05-15 12:28:01.660818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.660833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.287 #24 NEW cov: 12120 ft: 13524 corp: 5/233b lim: 100 exec/s: 0 rss: 70Mb L: 61/61 MS: 1 CMP- DE: "\000\000\000\001"- 00:07:17.287 [2024-05-15 12:28:01.710636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.710663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.287 [2024-05-15 12:28:01.710696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.710712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.287 #25 NEW cov: 12120 ft: 13606 corp: 6/286b lim: 100 exec/s: 0 rss: 70Mb L: 53/61 MS: 1 EraseBytes- 00:07:17.287 [2024-05-15 12:28:01.751109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 
len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.751136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.287 [2024-05-15 12:28:01.751201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.751218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.287 [2024-05-15 12:28:01.751274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10634005409016288147 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.751290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.287 [2024-05-15 12:28:01.751345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:10634005407197270931 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.751362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.287 #26 NEW cov: 12129 ft: 14049 corp: 7/373b lim: 100 exec/s: 0 rss: 70Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:07:17.287 [2024-05-15 12:28:01.800910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069586232831 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.287 [2024-05-15 12:28:01.800937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.287 [2024-05-15 12:28:01.800987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.288 [2024-05-15 12:28:01.801002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.288 #27 NEW cov: 12129 ft: 14083 corp: 8/431b lim: 100 exec/s: 0 rss: 70Mb L: 58/87 MS: 1 CrossOver- 00:07:17.288 [2024-05-15 12:28:01.841048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.288 [2024-05-15 12:28:01.841075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.288 [2024-05-15 12:28:01.841105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:288230376135000064 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.288 [2024-05-15 12:28:01.841121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.288 #28 NEW cov: 12129 ft: 14122 corp: 9/488b lim: 100 exec/s: 0 rss: 70Mb L: 57/87 MS: 1 ChangeByte- 00:07:17.288 [2024-05-15 12:28:01.891024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.288 [2024-05-15 12:28:01.891051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:17.546 #33 NEW cov: 12129 ft: 15017 corp: 10/512b lim: 100 exec/s: 0 rss: 70Mb L: 24/87 MS: 5 ShuffleBytes-InsertByte-ShuffleBytes-PersAutoDict-InsertRepeatedBytes- DE: "\037\000"- 00:07:17.546 [2024-05-15 12:28:01.931420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:01.931449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-05-15 12:28:01.931494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18374967958943301631 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:01.931509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.547 [2024-05-15 12:28:01.931564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:01.931580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.547 #34 NEW cov: 12129 ft: 15076 corp: 11/573b lim: 100 exec/s: 0 rss: 70Mb L: 61/87 MS: 1 InsertRepeatedBytes- 00:07:17.547 [2024-05-15 12:28:01.971527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:31488 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:01.971554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-05-15 12:28:01.971599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446463702539436031 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:01.971615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.547 [2024-05-15 12:28:01.971671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:01.971686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.547 #35 NEW cov: 12129 ft: 15117 corp: 12/635b lim: 100 exec/s: 0 rss: 70Mb L: 62/87 MS: 1 InsertByte- 00:07:17.547 [2024-05-15 12:28:02.021390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1048576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:02.021417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 #36 NEW cov: 12129 ft: 15171 corp: 13/659b lim: 100 exec/s: 0 rss: 70Mb L: 24/87 MS: 1 ChangeBit- 00:07:17.547 [2024-05-15 12:28:02.071819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:02.071848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-05-15 
12:28:02.071882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18379471562865639423 len:1024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:02.071898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.547 [2024-05-15 12:28:02.071955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:02.071971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.547 NEW_FUNC[1/1]: 0x1a29d50 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:17.547 #37 NEW cov: 12152 ft: 15184 corp: 14/722b lim: 100 exec/s: 0 rss: 70Mb L: 63/87 MS: 1 CMP- DE: "\021\000"- 00:07:17.547 [2024-05-15 12:28:02.111796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:02.111822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-05-15 12:28:02.111857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18374967958943301631 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:02.111874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.547 #38 NEW cov: 12152 ft: 15220 corp: 15/765b lim: 100 exec/s: 0 rss: 70Mb L: 43/87 MS: 1 EraseBytes- 00:07:17.547 [2024-05-15 12:28:02.151918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069586232831 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:02.151946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.547 [2024-05-15 12:28:02.151991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.547 [2024-05-15 12:28:02.152006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.806 #39 NEW cov: 12152 ft: 15233 corp: 16/823b lim: 100 exec/s: 0 rss: 70Mb L: 58/87 MS: 1 ShuffleBytes- 00:07:17.806 [2024-05-15 12:28:02.202015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073696650751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.202043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.806 [2024-05-15 12:28:02.202076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.202092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.806 #40 NEW cov: 12152 ft: 15284 corp: 17/880b lim: 100 exec/s: 40 rss: 70Mb L: 57/87 
MS: 1 ShuffleBytes- 00:07:17.806 [2024-05-15 12:28:02.242140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.242167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.806 [2024-05-15 12:28:02.242203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:288230376135000064 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.242221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.806 #41 NEW cov: 12152 ft: 15293 corp: 18/939b lim: 100 exec/s: 41 rss: 71Mb L: 59/87 MS: 1 PersAutoDict- DE: "\021\000"- 00:07:17.806 [2024-05-15 12:28:02.282241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069586232829 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.282269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.806 [2024-05-15 12:28:02.282332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.282349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.806 #42 NEW cov: 12152 ft: 15405 corp: 19/997b lim: 100 exec/s: 42 rss: 71Mb L: 58/87 MS: 1 ChangeBinInt- 00:07:17.806 [2024-05-15 12:28:02.332427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1048576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.332453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.806 [2024-05-15 12:28:02.332489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.332504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.806 #43 NEW cov: 12152 ft: 15443 corp: 20/1039b lim: 100 exec/s: 43 rss: 71Mb L: 42/87 MS: 1 InsertRepeatedBytes- 00:07:17.806 [2024-05-15 12:28:02.382730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.382757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.806 [2024-05-15 12:28:02.382806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446463702539436031 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.806 [2024-05-15 12:28:02.382822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.806 [2024-05-15 12:28:02.382894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:17.806 [2024-05-15 12:28:02.382910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.806 #44 NEW cov: 12152 ft: 15449 corp: 21/1101b lim: 100 exec/s: 44 rss: 71Mb L: 62/87 MS: 1 ShuffleBytes- 00:07:18.065 [2024-05-15 12:28:02.433012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.433039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.433091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446463702539436031 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.433106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.433161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073696650751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.433176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.433235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.433251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.065 #45 NEW cov: 12152 ft: 15492 corp: 22/1194b lim: 100 exec/s: 45 rss: 71Mb L: 93/93 MS: 1 CrossOver- 00:07:18.065 [2024-05-15 12:28:02.483064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.483091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.483138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.483153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.483211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.483226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.065 #46 NEW cov: 12152 ft: 15514 corp: 23/1269b lim: 100 exec/s: 46 rss: 71Mb L: 75/93 MS: 1 InsertRepeatedBytes- 00:07:18.065 [2024-05-15 12:28:02.533289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.533317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.065 
[2024-05-15 12:28:02.533389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.533405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.533460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10634005409016288147 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.533476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.533532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:10634005407197270931 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.533547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.065 #47 NEW cov: 12152 ft: 15524 corp: 24/1356b lim: 100 exec/s: 47 rss: 71Mb L: 87/93 MS: 1 ChangeByte- 00:07:18.065 [2024-05-15 12:28:02.583292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.583319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.583366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18374967958929407999 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.583385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.583442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.583458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.065 #48 NEW cov: 12152 ft: 15553 corp: 25/1417b lim: 100 exec/s: 48 rss: 72Mb L: 61/93 MS: 1 PersAutoDict- DE: "\000\000\000\001"- 00:07:18.065 [2024-05-15 12:28:02.633268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406872832 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.633295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.065 [2024-05-15 12:28:02.633330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446742982787858431 len:1024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.065 [2024-05-15 12:28:02.633345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.065 #49 NEW cov: 12152 ft: 15566 corp: 26/1462b lim: 100 exec/s: 49 rss: 72Mb L: 45/93 MS: 1 PersAutoDict- DE: "\037\000"- 00:07:18.324 [2024-05-15 12:28:02.683561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.683589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.324 [2024-05-15 12:28:02.683638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446463702539436031 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.683654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.324 [2024-05-15 12:28:02.683712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.683727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.324 #50 NEW cov: 12152 ft: 15581 corp: 27/1524b lim: 100 exec/s: 50 rss: 72Mb L: 62/93 MS: 1 ShuffleBytes- 00:07:18.324 [2024-05-15 12:28:02.723541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.723568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.324 [2024-05-15 12:28:02.723604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.723620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.324 #51 NEW cov: 12152 ft: 15589 corp: 28/1570b lim: 100 exec/s: 51 rss: 72Mb L: 46/93 MS: 1 EraseBytes- 00:07:18.324 [2024-05-15 12:28:02.763659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18432107371617976319 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.763687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.324 [2024-05-15 12:28:02.763751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:288230376135000064 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.763767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.324 #52 NEW cov: 12152 ft: 15621 corp: 29/1627b lim: 100 exec/s: 52 rss: 72Mb L: 57/93 MS: 1 ChangeByte- 00:07:18.324 [2024-05-15 12:28:02.803655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1048576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.803680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.324 #53 NEW cov: 12152 ft: 15683 corp: 30/1649b lim: 100 exec/s: 53 rss: 72Mb L: 22/93 MS: 1 EraseBytes- 00:07:18.324 [2024-05-15 12:28:02.844032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.844059] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.324 [2024-05-15 12:28:02.844102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.844117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.324 [2024-05-15 12:28:02.844173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.844189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.324 #54 NEW cov: 12152 ft: 15687 corp: 31/1724b lim: 100 exec/s: 54 rss: 72Mb L: 75/93 MS: 1 ChangeBinInt- 00:07:18.324 [2024-05-15 12:28:02.894200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069586232829 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.324 [2024-05-15 12:28:02.894227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.325 [2024-05-15 12:28:02.894262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.325 [2024-05-15 12:28:02.894278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.325 [2024-05-15 12:28:02.894336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.325 [2024-05-15 12:28:02.894367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.325 #55 NEW cov: 12152 ft: 15699 corp: 32/1784b lim: 100 exec/s: 55 rss: 72Mb L: 60/93 MS: 1 CrossOver- 00:07:18.583 [2024-05-15 12:28:02.944175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069586232829 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.583 [2024-05-15 12:28:02.944202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.583 [2024-05-15 12:28:02.944233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.583 [2024-05-15 12:28:02.944249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.583 #56 NEW cov: 12152 ft: 15716 corp: 33/1836b lim: 100 exec/s: 56 rss: 72Mb L: 52/93 MS: 1 EraseBytes- 00:07:18.583 [2024-05-15 12:28:02.984243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16782920193482106367 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.583 [2024-05-15 12:28:02.984271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.583 [2024-05-15 12:28:02.984306] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.583 [2024-05-15 12:28:02.984323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.583 [2024-05-15 12:28:03.024332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16782920193482106367 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.583 [2024-05-15 12:28:03.024362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.583 [2024-05-15 12:28:03.024402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744072598451848 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.583 [2024-05-15 12:28:03.024418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.583 #58 NEW cov: 12152 ft: 15730 corp: 34/1894b lim: 100 exec/s: 58 rss: 72Mb L: 58/93 MS: 2 CrossOver-CMP- DE: "\001\206\007k\275\305\366\210"- 00:07:18.583 [2024-05-15 12:28:03.064598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.583 [2024-05-15 12:28:03.064625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.584 [2024-05-15 12:28:03.064689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.064704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.584 [2024-05-15 12:28:03.064772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.064787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.584 #59 NEW cov: 12152 ft: 15744 corp: 35/1955b lim: 100 exec/s: 59 rss: 72Mb L: 61/93 MS: 1 CrossOver- 00:07:18.584 [2024-05-15 12:28:03.104726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.104753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.584 [2024-05-15 12:28:03.104798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18374967958929407999 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.104814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.584 [2024-05-15 12:28:03.104869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.104884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.584 #60 NEW cov: 12152 ft: 15808 corp: 36/2018b lim: 100 exec/s: 60 rss: 72Mb L: 63/93 MS: 1 PersAutoDict- DE: "\021\000"- 00:07:18.584 [2024-05-15 12:28:03.154683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5425512963627502411 len:19276 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.154708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.584 [2024-05-15 12:28:03.154760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5425512962855750475 len:19276 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.154774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.584 #62 NEW cov: 12152 ft: 15809 corp: 37/2076b lim: 100 exec/s: 62 rss: 72Mb L: 58/93 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:18.584 [2024-05-15 12:28:03.194797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744070406930431 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.194823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.584 [2024-05-15 12:28:03.194861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:288230376135000064 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.584 [2024-05-15 12:28:03.194876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.841 #63 NEW cov: 12152 ft: 15821 corp: 38/2133b lim: 100 exec/s: 31 rss: 72Mb L: 57/93 MS: 1 ChangeBit- 00:07:18.841 #63 DONE cov: 12152 ft: 15821 corp: 38/2133b lim: 100 exec/s: 31 rss: 72Mb 00:07:18.841 ###### Recommended dictionary. ###### 00:07:18.841 "\037\000" # Uses: 2 00:07:18.841 "\000\000\000\001" # Uses: 1 00:07:18.841 "\021\000" # Uses: 2 00:07:18.841 "\001\206\007k\275\305\366\210" # Uses: 0 00:07:18.841 ###### End of recommended dictionary. 
###### 00:07:18.841 Done 63 runs in 2 second(s) 00:07:18.841 [2024-05-15 12:28:03.214524] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:18.841 12:28:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.841 12:28:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.841 12:28:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.841 12:28:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:18.841 00:07:18.841 real 1m3.939s 00:07:18.841 user 1m40.175s 00:07:18.841 sys 0m7.010s 00:07:18.841 12:28:03 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:18.841 12:28:03 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:18.841 ************************************ 00:07:18.841 END TEST nvmf_fuzz 00:07:18.841 ************************************ 00:07:18.841 12:28:03 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:18.841 12:28:03 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:18.841 12:28:03 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:18.841 12:28:03 llvm_fuzz -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:18.841 12:28:03 llvm_fuzz -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:18.841 12:28:03 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:18.841 ************************************ 00:07:18.841 START TEST vfio_fuzz 00:07:18.841 ************************************ 00:07:18.841 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:19.103 * Looking for test storage... 
00:07:19.103 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:19.103 12:28:03 
llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:19.103 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:19.103 #define SPDK_CONFIG_H 00:07:19.103 #define SPDK_CONFIG_APPS 1 00:07:19.103 #define SPDK_CONFIG_ARCH native 00:07:19.103 #undef SPDK_CONFIG_ASAN 00:07:19.103 #undef SPDK_CONFIG_AVAHI 00:07:19.103 #undef SPDK_CONFIG_CET 00:07:19.103 #define SPDK_CONFIG_COVERAGE 1 00:07:19.103 #define SPDK_CONFIG_CROSS_PREFIX 00:07:19.103 #undef SPDK_CONFIG_CRYPTO 00:07:19.103 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:19.104 #undef SPDK_CONFIG_CUSTOMOCF 00:07:19.104 #undef SPDK_CONFIG_DAOS 00:07:19.104 #define SPDK_CONFIG_DAOS_DIR 00:07:19.104 #define SPDK_CONFIG_DEBUG 1 00:07:19.104 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:19.104 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:19.104 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:19.104 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:19.104 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:19.104 #undef SPDK_CONFIG_DPDK_UADK 00:07:19.104 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:19.104 #define SPDK_CONFIG_EXAMPLES 1 00:07:19.104 #undef SPDK_CONFIG_FC 00:07:19.104 #define SPDK_CONFIG_FC_PATH 00:07:19.104 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:19.104 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:19.104 #undef SPDK_CONFIG_FUSE 00:07:19.104 #define SPDK_CONFIG_FUZZER 1 00:07:19.104 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:19.104 #undef SPDK_CONFIG_GOLANG 00:07:19.104 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:19.104 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:19.104 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:19.104 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:19.104 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:19.104 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:19.104 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:19.104 #define SPDK_CONFIG_IDXD 1 00:07:19.104 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:19.104 #undef SPDK_CONFIG_IPSEC_MB 00:07:19.104 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:19.104 #define SPDK_CONFIG_ISAL 1 00:07:19.104 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:19.104 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:19.104 #define SPDK_CONFIG_LIBDIR 00:07:19.104 #undef SPDK_CONFIG_LTO 00:07:19.104 #define SPDK_CONFIG_MAX_LCORES 00:07:19.104 #define SPDK_CONFIG_NVME_CUSE 1 00:07:19.104 #undef SPDK_CONFIG_OCF 00:07:19.104 #define SPDK_CONFIG_OCF_PATH 00:07:19.104 #define SPDK_CONFIG_OPENSSL_PATH 00:07:19.104 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:19.104 #define SPDK_CONFIG_PGO_DIR 00:07:19.104 #undef SPDK_CONFIG_PGO_USE 00:07:19.104 #define SPDK_CONFIG_PREFIX /usr/local 00:07:19.104 #undef SPDK_CONFIG_RAID5F 00:07:19.104 #undef 
SPDK_CONFIG_RBD 00:07:19.104 #define SPDK_CONFIG_RDMA 1 00:07:19.104 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:19.104 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:19.104 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:19.104 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:19.104 #undef SPDK_CONFIG_SHARED 00:07:19.104 #undef SPDK_CONFIG_SMA 00:07:19.104 #define SPDK_CONFIG_TESTS 1 00:07:19.104 #undef SPDK_CONFIG_TSAN 00:07:19.104 #define SPDK_CONFIG_UBLK 1 00:07:19.104 #define SPDK_CONFIG_UBSAN 1 00:07:19.104 #undef SPDK_CONFIG_UNIT_TESTS 00:07:19.104 #undef SPDK_CONFIG_URING 00:07:19.104 #define SPDK_CONFIG_URING_PATH 00:07:19.104 #undef SPDK_CONFIG_URING_ZNS 00:07:19.104 #undef SPDK_CONFIG_USDT 00:07:19.104 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:19.104 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:19.104 #define SPDK_CONFIG_VFIO_USER 1 00:07:19.104 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:19.104 #define SPDK_CONFIG_VHOST 1 00:07:19.104 #define SPDK_CONFIG_VIRTIO 1 00:07:19.104 #undef SPDK_CONFIG_VTUNE 00:07:19.104 #define SPDK_CONFIG_VTUNE_DIR 00:07:19.104 #define SPDK_CONFIG_WERROR 1 00:07:19.104 #define SPDK_CONFIG_WPDK_DIR 00:07:19.104 #undef SPDK_CONFIG_XNVME 00:07:19.104 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:07:19.104 12:28:03 
llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # : 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:19.104 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:19.105 
12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # : 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # : 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # : 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # : 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:19.105 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # [[ -z 2411396 ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # kill -0 2411396 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:19.106 
12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.wzpsM2 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.wzpsM2/tests/vfio /tmp/spdk.wzpsM2 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=968024064 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4316405760 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=52255887360 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=9486417920 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866440192 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342489088 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5971968 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30869540864 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1613824 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:19.106 * Looking for test storage... 
00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # target_space=52255887360 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # new_size=11701010432 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:19.106 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:19.106 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # set -o errtrace 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1684 -- # true 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1686 -- # xtrace_fd 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:19.107 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:07:19.107 12:28:03 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:19.366 [2024-05-15 12:28:03.730144] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:07:19.366 [2024-05-15 12:28:03.730217] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411445 ] 00:07:19.366 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.366 [2024-05-15 12:28:03.802667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.366 [2024-05-15 12:28:03.873706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.625 [2024-05-15 12:28:04.038522] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:19.625 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.625 INFO: Seed: 2732802638 00:07:19.625 INFO: Loaded 1 modules (350880 inline 8-bit counters): 350880 [0x28f1c8c, 0x294772c), 00:07:19.625 INFO: Loaded 1 PC tables (350880 PCs): 350880 [0x2947730,0x2ea2130), 00:07:19.625 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:19.625 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.625 #2 INITED exec/s: 0 rss: 65Mb 00:07:19.625 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:19.625 This may also happen if the target rejected all inputs we tried so far 00:07:19.625 [2024-05-15 12:28:04.115300] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:20.141 NEW_FUNC[1/646]: 0x481740 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:20.141 NEW_FUNC[2/646]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:20.141 #46 NEW cov: 10917 ft: 10763 corp: 2/7b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 4 InsertRepeatedBytes-ChangeBit-ShuffleBytes-InsertByte- 00:07:20.141 #52 NEW cov: 10931 ft: 13661 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:20.399 NEW_FUNC[1/1]: 0x19f6280 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:20.399 #58 NEW cov: 10948 ft: 14851 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ChangeByte- 00:07:20.657 #59 NEW cov: 10948 ft: 15130 corp: 5/25b lim: 6 exec/s: 59 rss: 73Mb L: 6/6 MS: 1 ChangeByte- 00:07:20.915 #60 NEW cov: 10948 ft: 17009 corp: 6/31b lim: 6 exec/s: 60 rss: 73Mb L: 6/6 MS: 1 ChangeBit- 00:07:20.915 #61 NEW cov: 10948 ft: 17833 corp: 7/37b lim: 6 exec/s: 61 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:21.173 #62 NEW cov: 10948 ft: 18184 corp: 8/43b lim: 6 exec/s: 62 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:21.431 #68 NEW cov: 10948 ft: 18281 corp: 9/49b lim: 6 exec/s: 68 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:21.431 #69 NEW cov: 10955 ft: 18341 corp: 10/55b lim: 6 exec/s: 69 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:21.689 #70 NEW cov: 10965 ft: 18446 corp: 11/61b lim: 6 exec/s: 35 rss: 74Mb L: 6/6 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:21.689 #70 DONE cov: 10965 ft: 18446 corp: 11/61b lim: 6 exec/s: 35 rss: 74Mb 00:07:21.689 ###### Recommended dictionary. ###### 00:07:21.689 "\000\000\000\000" # Uses: 0 00:07:21.689 ###### End of recommended dictionary. 
###### 00:07:21.689 Done 70 runs in 2 second(s) 00:07:21.689 [2024-05-15 12:28:06.215579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:21.689 [2024-05-15 12:28:06.265587] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:21.946 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:21.946 12:28:06 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:21.946 [2024-05-15 12:28:06.506910] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:07:21.946 [2024-05-15 12:28:06.506999] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411977 ] 00:07:21.946 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.204 [2024-05-15 12:28:06.581784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.204 [2024-05-15 12:28:06.654128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.204 [2024-05-15 12:28:06.821586] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:22.463 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.463 INFO: Seed: 1220841090 00:07:22.463 INFO: Loaded 1 modules (350880 inline 8-bit counters): 350880 [0x28f1c8c, 0x294772c), 00:07:22.463 INFO: Loaded 1 PC tables (350880 PCs): 350880 [0x2947730,0x2ea2130), 00:07:22.463 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:22.463 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.463 #2 INITED exec/s: 0 rss: 65Mb 00:07:22.463 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.463 This may also happen if the target rejected all inputs we tried so far 00:07:22.463 [2024-05-15 12:28:06.891941] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:22.463 [2024-05-15 12:28:06.935434] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:22.463 [2024-05-15 12:28:06.935455] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:22.463 [2024-05-15 12:28:06.935472] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:22.720 NEW_FUNC[1/648]: 0x481ce0 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:22.720 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:22.720 #17 NEW cov: 10912 ft: 10806 corp: 2/5b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 5 ShuffleBytes-CopyPart-ShuffleBytes-InsertByte-InsertByte- 00:07:22.977 [2024-05-15 12:28:07.390606] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:22.977 [2024-05-15 12:28:07.390639] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:22.977 [2024-05-15 12:28:07.390657] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:22.977 #18 NEW cov: 10931 ft: 13733 corp: 3/9b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:07:22.977 [2024-05-15 12:28:07.556246] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:22.977 [2024-05-15 12:28:07.556269] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:22.977 [2024-05-15 12:28:07.556286] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.233 #19 NEW cov: 10931 ft: 15226 corp: 4/13b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:23.233 [2024-05-15 12:28:07.722258] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad 
command 1 00:07:23.233 [2024-05-15 12:28:07.722280] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:23.233 [2024-05-15 12:28:07.722299] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.233 NEW_FUNC[1/1]: 0x19f6280 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:23.233 #20 NEW cov: 10948 ft: 15645 corp: 5/17b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:23.490 [2024-05-15 12:28:07.888237] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:23.490 [2024-05-15 12:28:07.888259] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:23.490 [2024-05-15 12:28:07.888276] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.490 #26 NEW cov: 10948 ft: 16027 corp: 6/21b lim: 4 exec/s: 26 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:23.490 [2024-05-15 12:28:08.054817] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:23.490 [2024-05-15 12:28:08.054837] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:23.490 [2024-05-15 12:28:08.054854] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.746 #27 NEW cov: 10948 ft: 16199 corp: 7/25b lim: 4 exec/s: 27 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:07:23.746 [2024-05-15 12:28:08.220594] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:23.746 [2024-05-15 12:28:08.220615] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:23.746 [2024-05-15 12:28:08.220632] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:23.746 #28 NEW cov: 10948 ft: 17226 corp: 8/29b lim: 4 exec/s: 28 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:24.003 [2024-05-15 12:28:08.387173] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:24.003 [2024-05-15 12:28:08.387196] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:24.003 [2024-05-15 12:28:08.387214] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:24.003 #29 NEW cov: 10948 ft: 17305 corp: 9/33b lim: 4 exec/s: 29 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:07:24.003 [2024-05-15 12:28:08.555081] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:24.003 [2024-05-15 12:28:08.555102] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:24.003 [2024-05-15 12:28:08.555119] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:24.259 #30 NEW cov: 10948 ft: 17407 corp: 10/37b lim: 4 exec/s: 30 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:24.259 [2024-05-15 12:28:08.722466] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:24.259 [2024-05-15 12:28:08.722488] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:24.259 [2024-05-15 12:28:08.722506] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:24.259 #31 NEW cov: 10955 ft: 17607 corp: 11/41b lim: 4 exec/s: 31 rss: 73Mb L: 4/4 MS: 1 ChangeBit- 00:07:24.516 [2024-05-15 12:28:08.890438] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 
00:07:24.516 [2024-05-15 12:28:08.890459] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:24.516 [2024-05-15 12:28:08.890476] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:24.516 #32 pulse cov: 10955 ft: 17753 corp: 11/41b lim: 4 exec/s: 16 rss: 73Mb 00:07:24.516 #32 NEW cov: 10955 ft: 17753 corp: 12/45b lim: 4 exec/s: 16 rss: 73Mb L: 4/4 MS: 1 ChangeASCIIInt- 00:07:24.516 #32 DONE cov: 10955 ft: 17753 corp: 12/45b lim: 4 exec/s: 16 rss: 73Mb 00:07:24.516 Done 32 runs in 2 second(s) 00:07:24.516 [2024-05-15 12:28:09.008570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:24.516 [2024-05-15 12:28:09.058666] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:24.774 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:24.774 12:28:09 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r 
/tmp/vfio-user-2/spdk2.sock -Z 2 00:07:24.774 [2024-05-15 12:28:09.298451] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:07:24.774 [2024-05-15 12:28:09.298526] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412405 ] 00:07:24.774 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.774 [2024-05-15 12:28:09.369922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.032 [2024-05-15 12:28:09.444225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.032 [2024-05-15 12:28:09.619745] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:25.032 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.032 INFO: Seed: 4018855372 00:07:25.290 INFO: Loaded 1 modules (350880 inline 8-bit counters): 350880 [0x28f1c8c, 0x294772c), 00:07:25.290 INFO: Loaded 1 PC tables (350880 PCs): 350880 [0x2947730,0x2ea2130), 00:07:25.290 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:25.290 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.290 #2 INITED exec/s: 0 rss: 65Mb 00:07:25.290 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.290 This may also happen if the target rejected all inputs we tried so far 00:07:25.290 [2024-05-15 12:28:09.694371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:25.290 [2024-05-15 12:28:09.731026] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:25.548 NEW_FUNC[1/647]: 0x4826c0 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:25.548 NEW_FUNC[2/647]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:25.548 #56 NEW cov: 10892 ft: 10721 corp: 2/9b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 4 ChangeByte-InsertRepeatedBytes-ShuffleBytes-CopyPart- 00:07:25.806 [2024-05-15 12:28:10.214197] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:25.806 #57 NEW cov: 10914 ft: 13248 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 CopyPart- 00:07:25.806 [2024-05-15 12:28:10.390723] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.064 NEW_FUNC[1/1]: 0x19f6280 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:26.064 #58 NEW cov: 10931 ft: 14953 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:26.064 [2024-05-15 12:28:10.564714] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.064 #64 NEW cov: 10931 ft: 15194 corp: 5/33b lim: 8 exec/s: 64 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:07:26.322 [2024-05-15 12:28:10.742639] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.322 #73 NEW cov: 10931 ft: 15791 corp: 6/41b lim: 8 exec/s: 73 rss: 73Mb L: 8/8 MS: 4 EraseBytes-ChangeBit-ChangeByte-CrossOver- 00:07:26.322 [2024-05-15 12:28:10.915352] vfio_user.c: 
170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.580 #74 NEW cov: 10931 ft: 16343 corp: 7/49b lim: 8 exec/s: 74 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:26.580 [2024-05-15 12:28:11.089285] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.580 #80 NEW cov: 10931 ft: 16696 corp: 8/57b lim: 8 exec/s: 80 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:26.837 [2024-05-15 12:28:11.262644] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:26.837 #86 NEW cov: 10931 ft: 16915 corp: 9/65b lim: 8 exec/s: 86 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:07:26.837 [2024-05-15 12:28:11.435015] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:27.095 #87 NEW cov: 10938 ft: 17372 corp: 10/73b lim: 8 exec/s: 87 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:27.096 [2024-05-15 12:28:11.610210] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:27.354 #88 NEW cov: 10938 ft: 17408 corp: 11/81b lim: 8 exec/s: 44 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:27.354 #88 DONE cov: 10938 ft: 17408 corp: 11/81b lim: 8 exec/s: 44 rss: 73Mb 00:07:27.354 Done 88 runs in 2 second(s) 00:07:27.354 [2024-05-15 12:28:11.732605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:27.354 [2024-05-15 12:28:11.785206] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:27.613 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:27.613 12:28:11 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:27.613 [2024-05-15 12:28:12.028159] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:07:27.613 [2024-05-15 12:28:12.028225] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412809 ] 00:07:27.613 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.613 [2024-05-15 12:28:12.101732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.613 [2024-05-15 12:28:12.174393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.871 [2024-05-15 12:28:12.349211] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:27.871 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.871 INFO: Seed: 2452894397 00:07:27.871 INFO: Loaded 1 modules (350880 inline 8-bit counters): 350880 [0x28f1c8c, 0x294772c), 00:07:27.871 INFO: Loaded 1 PC tables (350880 PCs): 350880 [0x2947730,0x2ea2130), 00:07:27.871 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:27.871 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.871 #2 INITED exec/s: 0 rss: 65Mb 00:07:27.871 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:27.871 This may also happen if the target rejected all inputs we tried so far 00:07:27.871 [2024-05-15 12:28:12.419397] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:28.129 [2024-05-15 12:28:12.489526] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=323 offset=0xa00000000000000 prot=0x3: Invalid argument 00:07:28.129 [2024-05-15 12:28:12.489551] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0xa00000000000000 flags=0x3: Invalid argument 00:07:28.129 [2024-05-15 12:28:12.489562] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:28.129 [2024-05-15 12:28:12.489580] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:28.386 NEW_FUNC[1/648]: 0x482da0 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:28.386 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:28.386 #143 NEW cov: 10912 ft: 10824 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:28.386 [2024-05-15 12:28:12.982636] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 2531906049332683555 > max 8796093022208 00:07:28.386 [2024-05-15 12:28:12.982672] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x2323230000000000, 0x4646462323232323) offset=0xa00000000007e23 flags=0x3: No space left on device 00:07:28.386 [2024-05-15 12:28:12.982683] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:28.386 [2024-05-15 12:28:12.982716] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:28.651 #146 NEW cov: 10929 ft: 14253 corp: 3/65b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 3 InsertRepeatedBytes-InsertByte-InsertRepeatedBytes- 00:07:28.651 [2024-05-15 12:28:13.183493] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0xa00000000c40000 prot=0x3: Invalid argument 00:07:28.651 [2024-05-15 12:28:13.183517] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0xa00000000c40000 flags=0x3: Invalid argument 00:07:28.651 [2024-05-15 12:28:13.183528] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:28.651 [2024-05-15 12:28:13.183544] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:28.912 NEW_FUNC[1/1]: 0x19f6280 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:28.912 #147 NEW cov: 10946 ft: 15810 corp: 4/97b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeByte- 00:07:28.912 #148 NEW cov: 10950 ft: 16376 corp: 5/129b lim: 32 exec/s: 148 rss: 72Mb L: 32/32 MS: 1 ChangeBit- 00:07:29.212 [2024-05-15 12:28:13.567803] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0xa00000000000000 prot=0x3: Invalid argument 00:07:29.213 [2024-05-15 12:28:13.567827] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0xa00000000000000 flags=0x3: 
Invalid argument 00:07:29.213 [2024-05-15 12:28:13.567837] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:29.213 [2024-05-15 12:28:13.567854] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:29.213 #154 NEW cov: 10950 ft: 16887 corp: 6/161b lim: 32 exec/s: 154 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:07:29.213 [2024-05-15 12:28:13.756873] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0xff0a000000000000 prot=0x3: Invalid argument 00:07:29.213 [2024-05-15 12:28:13.756896] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0xff0a000000000000 flags=0x3: Invalid argument 00:07:29.213 [2024-05-15 12:28:13.756907] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:29.213 [2024-05-15 12:28:13.756940] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:29.486 #156 NEW cov: 10950 ft: 17186 corp: 7/193b lim: 32 exec/s: 156 rss: 73Mb L: 32/32 MS: 2 EraseBytes-InsertByte- 00:07:29.486 [2024-05-15 12:28:13.939498] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0xa00000000000000 prot=0x3: Invalid argument 00:07:29.486 [2024-05-15 12:28:13.939521] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0xa00000000000000 flags=0x3: Invalid argument 00:07:29.486 [2024-05-15 12:28:13.939532] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:29.486 [2024-05-15 12:28:13.939548] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:29.486 #157 NEW cov: 10950 ft: 17294 corp: 8/225b lim: 32 exec/s: 157 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:29.744 [2024-05-15 12:28:14.122233] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 2531906049332683750 > max 8796093022208 00:07:29.744 [2024-05-15 12:28:14.122257] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x2323230000000000, 0x46464623232323e6) offset=0xa00000000007e23 flags=0x3: No space left on device 00:07:29.744 [2024-05-15 12:28:14.122268] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:29.744 [2024-05-15 12:28:14.122285] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:29.744 #158 NEW cov: 10957 ft: 17598 corp: 9/257b lim: 32 exec/s: 158 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:29.744 [2024-05-15 12:28:14.302082] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 2531906049332687078 > max 8796093022208 00:07:29.744 [2024-05-15 12:28:14.302105] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x2323230000000000, 0x46464623232330e6) offset=0xa00000000007e23 flags=0x3: No space left on device 00:07:29.744 [2024-05-15 12:28:14.302116] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:29.744 [2024-05-15 12:28:14.302133] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:30.002 #169 NEW cov: 10957 ft: 17996 corp: 10/289b lim: 32 exec/s: 84 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:30.002 #169 DONE cov: 10957 ft: 
17996 corp: 10/289b lim: 32 exec/s: 84 rss: 73Mb 00:07:30.002 Done 169 runs in 2 second(s) 00:07:30.002 [2024-05-15 12:28:14.430578] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:30.002 [2024-05-15 12:28:14.480916] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:30.261 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:30.261 12:28:14 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:30.261 [2024-05-15 12:28:14.720630] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 
00:07:30.261 [2024-05-15 12:28:14.720704] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413342 ] 00:07:30.261 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.261 [2024-05-15 12:28:14.791944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.261 [2024-05-15 12:28:14.864454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.519 [2024-05-15 12:28:15.032563] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:30.519 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.519 INFO: Seed: 841905412 00:07:30.519 INFO: Loaded 1 modules (350880 inline 8-bit counters): 350880 [0x28f1c8c, 0x294772c), 00:07:30.519 INFO: Loaded 1 PC tables (350880 PCs): 350880 [0x2947730,0x2ea2130), 00:07:30.519 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:30.519 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.519 #2 INITED exec/s: 0 rss: 64Mb 00:07:30.519 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.519 This may also happen if the target rejected all inputs we tried so far 00:07:30.519 [2024-05-15 12:28:15.111618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:30.777 [2024-05-15 12:28:15.183713] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xffffff3800000000, 0xffffff380000feff) fd=323 offset=0xa00000000000000 prot=0x3: Permission denied 00:07:30.777 [2024-05-15 12:28:15.183738] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xffffff3800000000, 0xffffff380000feff) offset=0xa00000000000000 flags=0x3: Permission denied 00:07:30.777 [2024-05-15 12:28:15.183749] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:07:30.777 [2024-05-15 12:28:15.183768] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:30.777 [2024-05-15 12:28:15.184735] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xffffff3800000000, 0xffffff380000feff) flags=0: No such file or directory 00:07:30.777 [2024-05-15 12:28:15.184755] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:30.777 [2024-05-15 12:28:15.184772] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:31.036 NEW_FUNC[1/648]: 0x483620 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:31.036 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:31.036 #86 NEW cov: 10915 ft: 10745 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 4 InsertRepeatedBytes-InsertByte-ChangeBinInt-InsertByte- 00:07:31.294 [2024-05-15 12:28:15.667660] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xffffff3700000000, 0xffffff370000feff) fd=325 offset=0xa00000000000000 prot=0x3: Permission denied 00:07:31.294 
[2024-05-15 12:28:15.667692] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xffffff3700000000, 0xffffff370000feff) offset=0xa00000000000000 flags=0x3: Permission denied 00:07:31.294 [2024-05-15 12:28:15.667703] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:07:31.294 [2024-05-15 12:28:15.667720] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:31.294 [2024-05-15 12:28:15.668678] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xffffff3700000000, 0xffffff370000feff) flags=0: No such file or directory 00:07:31.294 [2024-05-15 12:28:15.668701] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:31.294 [2024-05-15 12:28:15.668717] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:31.294 #92 NEW cov: 10933 ft: 14110 corp: 3/65b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeASCIIInt- 00:07:31.294 [2024-05-15 12:28:15.850196] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x4000000000, 0x4000370000) fd=325 offset=0xa00000000000000 prot=0x3: Permission denied 00:07:31.294 [2024-05-15 12:28:15.850220] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x4000000000, 0x4000370000) offset=0xa00000000000000 flags=0x3: Permission denied 00:07:31.294 [2024-05-15 12:28:15.850230] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:07:31.294 [2024-05-15 12:28:15.850247] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:31.294 [2024-05-15 12:28:15.851222] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x4000000000, 0x4000370000) flags=0: No such file or directory 00:07:31.294 [2024-05-15 12:28:15.851241] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:31.294 [2024-05-15 12:28:15.851256] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:31.552 NEW_FUNC[1/1]: 0x19f6280 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:31.552 #113 NEW cov: 10950 ft: 15347 corp: 4/97b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:07:31.552 [2024-05-15 12:28:16.046801] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xffffff3800000000, 0xffffff380000feff) fd=325 offset=0xa00000000000010 prot=0x3: Permission denied 00:07:31.552 [2024-05-15 12:28:16.046823] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xffffff3800000000, 0xffffff380000feff) offset=0xa00000000000010 flags=0x3: Permission denied 00:07:31.552 [2024-05-15 12:28:16.046834] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:07:31.552 [2024-05-15 12:28:16.046850] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:31.552 [2024-05-15 12:28:16.047812] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xffffff3800000000, 0xffffff380000feff) flags=0: No such file or directory 00:07:31.552 [2024-05-15 12:28:16.047830] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: 
No such file or directory 00:07:31.552 [2024-05-15 12:28:16.047846] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:31.552 #114 NEW cov: 10950 ft: 16867 corp: 5/129b lim: 32 exec/s: 114 rss: 72Mb L: 32/32 MS: 1 ChangeByte- 00:07:31.810 [2024-05-15 12:28:16.243641] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xffffff3800000000, 0xffffff380000feff) fd=325 offset=0xa00000003000000 prot=0x3: Permission denied 00:07:31.810 [2024-05-15 12:28:16.243664] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xffffff3800000000, 0xffffff380000feff) offset=0xa00000003000000 flags=0x3: Permission denied 00:07:31.810 [2024-05-15 12:28:16.243674] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:07:31.810 [2024-05-15 12:28:16.243690] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:31.810 [2024-05-15 12:28:16.244641] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xffffff3800000000, 0xffffff380000feff) flags=0: No such file or directory 00:07:31.810 [2024-05-15 12:28:16.244659] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:31.810 [2024-05-15 12:28:16.244675] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:31.810 #115 NEW cov: 10950 ft: 17027 corp: 6/161b lim: 32 exec/s: 115 rss: 72Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:31.810 [2024-05-15 12:28:16.426970] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xffffff3800000000, 0xffffff380000fffe) fd=325 offset=0xa00000000000010 prot=0x3: Permission denied 00:07:31.810 [2024-05-15 12:28:16.426992] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xffffff3800000000, 0xffffff380000fffe) offset=0xa00000000000010 flags=0x3: Permission denied 00:07:31.810 [2024-05-15 12:28:16.427003] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:07:31.810 [2024-05-15 12:28:16.427020] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:32.069 [2024-05-15 12:28:16.427978] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xffffff3800000000, 0xffffff380000fffe) flags=0: No such file or directory 00:07:32.069 [2024-05-15 12:28:16.427997] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:32.069 [2024-05-15 12:28:16.428013] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:32.069 #121 NEW cov: 10950 ft: 17284 corp: 7/193b lim: 32 exec/s: 121 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:32.069 [2024-05-15 12:28:16.610565] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 4503599627435775 > max 8796093022208 00:07:32.069 [2024-05-15 12:28:16.610587] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xffffff3800000000, 0xfff380000feff) offset=0xa00000000000000 flags=0x3: No space left on device 00:07:32.069 [2024-05-15 12:28:16.610598] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:07:32.069 [2024-05-15 12:28:16.610614] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return 
failure 00:07:32.069 [2024-05-15 12:28:16.611601] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xffffff3800000000, 0xfff380000feff) flags=0: No such file or directory 00:07:32.069 [2024-05-15 12:28:16.611619] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:32.069 [2024-05-15 12:28:16.611635] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:32.328 #127 NEW cov: 10950 ft: 17440 corp: 8/225b lim: 32 exec/s: 127 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:32.328 [2024-05-15 12:28:16.789209] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xb1ffff3800000000, 0xb1ffff380000fffe) fd=325 offset=0xa00000000000010 prot=0x3: Permission denied 00:07:32.328 [2024-05-15 12:28:16.789231] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xb1ffff3800000000, 0xb1ffff380000fffe) offset=0xa00000000000010 flags=0x3: Permission denied 00:07:32.328 [2024-05-15 12:28:16.789242] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:07:32.328 [2024-05-15 12:28:16.789259] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:32.328 [2024-05-15 12:28:16.790226] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xb1ffff3800000000, 0xb1ffff380000fffe) flags=0: No such file or directory 00:07:32.328 [2024-05-15 12:28:16.790245] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:32.328 [2024-05-15 12:28:16.790261] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:32.328 #128 NEW cov: 10957 ft: 17591 corp: 9/257b lim: 32 exec/s: 128 rss: 72Mb L: 32/32 MS: 1 ChangeByte- 00:07:32.586 #129 NEW cov: 10957 ft: 17934 corp: 10/289b lim: 32 exec/s: 64 rss: 72Mb L: 32/32 MS: 1 ChangeBit- 00:07:32.586 #129 DONE cov: 10957 ft: 17934 corp: 10/289b lim: 32 exec/s: 64 rss: 72Mb 00:07:32.586 Done 129 runs in 2 second(s) 00:07:32.586 [2024-05-15 12:28:17.105580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:32.586 [2024-05-15 12:28:17.155448] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- 
vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:32.845 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:32.845 12:28:17 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:32.845 [2024-05-15 12:28:17.394543] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:07:32.845 [2024-05-15 12:28:17.394629] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413879 ] 00:07:32.845 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.103 [2024-05-15 12:28:17.468235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.103 [2024-05-15 12:28:17.540085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.103 [2024-05-15 12:28:17.706320] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:33.361 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.361 INFO: Seed: 3515906937 00:07:33.361 INFO: Loaded 1 modules (350880 inline 8-bit counters): 350880 [0x28f1c8c, 0x294772c), 00:07:33.361 INFO: Loaded 1 PC tables (350880 PCs): 350880 [0x2947730,0x2ea2130), 00:07:33.361 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:33.361 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.361 #2 INITED exec/s: 0 rss: 65Mb 00:07:33.361 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:33.361 This may also happen if the target rejected all inputs we tried so far 00:07:33.361 [2024-05-15 12:28:17.774239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:33.361 [2024-05-15 12:28:17.823460] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.361 [2024-05-15 12:28:17.823499] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:33.619 NEW_FUNC[1/648]: 0x484020 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:33.619 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:33.619 #74 NEW cov: 10919 ft: 10863 corp: 2/14b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:07:33.877 [2024-05-15 12:28:18.299015] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.877 [2024-05-15 12:28:18.299058] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:33.877 #75 NEW cov: 10933 ft: 13856 corp: 3/27b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 CMP- DE: "k\000\000\000\000\000\000\000"- 00:07:33.877 [2024-05-15 12:28:18.481446] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.877 [2024-05-15 12:28:18.481476] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.134 NEW_FUNC[1/1]: 0x19f6280 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:34.134 #76 NEW cov: 10950 ft: 15431 corp: 4/40b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeBit- 00:07:34.134 [2024-05-15 12:28:18.662697] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.134 [2024-05-15 12:28:18.662728] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.391 #77 NEW cov: 10950 ft: 16609 corp: 5/53b lim: 13 exec/s: 77 rss: 72Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:34.391 [2024-05-15 12:28:18.845688] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.391 [2024-05-15 12:28:18.845718] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.391 #78 NEW cov: 10950 ft: 16929 corp: 6/66b lim: 13 exec/s: 78 rss: 72Mb L: 13/13 MS: 1 CrossOver- 00:07:34.648 [2024-05-15 12:28:19.029563] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.648 [2024-05-15 12:28:19.029593] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.648 #89 NEW cov: 10950 ft: 17193 corp: 7/79b lim: 13 exec/s: 89 rss: 73Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:34.648 [2024-05-15 12:28:19.211693] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.648 [2024-05-15 12:28:19.211722] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.905 #90 NEW cov: 10950 ft: 17331 corp: 8/92b lim: 13 exec/s: 90 rss: 73Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:34.905 [2024-05-15 12:28:19.392633] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.905 [2024-05-15 12:28:19.392661] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
00:07:34.905 #91 NEW cov: 10950 ft: 17543 corp: 9/105b lim: 13 exec/s: 91 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:07:35.162 [2024-05-15 12:28:19.575931] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:35.162 [2024-05-15 12:28:19.575961] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:35.162 #92 NEW cov: 10957 ft: 17598 corp: 10/118b lim: 13 exec/s: 92 rss: 73Mb L: 13/13 MS: 1 ChangeByte- 00:07:35.162 [2024-05-15 12:28:19.758871] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:35.162 [2024-05-15 12:28:19.758901] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:35.419 #93 NEW cov: 10957 ft: 17885 corp: 11/131b lim: 13 exec/s: 46 rss: 73Mb L: 13/13 MS: 1 CrossOver- 00:07:35.419 #93 DONE cov: 10957 ft: 17885 corp: 11/131b lim: 13 exec/s: 46 rss: 73Mb 00:07:35.419 ###### Recommended dictionary. ###### 00:07:35.419 "k\000\000\000\000\000\000\000" # Uses: 0 00:07:35.419 ###### End of recommended dictionary. ###### 00:07:35.419 Done 93 runs in 2 second(s) 00:07:35.419 [2024-05-15 12:28:19.890579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:35.419 [2024-05-15 12:28:19.940261] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:35.676 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # 
echo leak:spdk_nvmf_qpair_disconnect 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:35.676 12:28:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:35.676 [2024-05-15 12:28:20.168992] Starting SPDK v24.05-pre git sha1 95a28e501 / DPDK 23.11.0 initialization... 00:07:35.676 [2024-05-15 12:28:20.169048] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414338 ] 00:07:35.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.676 [2024-05-15 12:28:20.240997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.932 [2024-05-15 12:28:20.316117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.932 [2024-05-15 12:28:20.482599] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:35.932 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.932 INFO: Seed: 1994943704 00:07:35.932 INFO: Loaded 1 modules (350880 inline 8-bit counters): 350880 [0x28f1c8c, 0x294772c), 00:07:35.932 INFO: Loaded 1 PC tables (350880 PCs): 350880 [0x2947730,0x2ea2130), 00:07:35.932 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:35.932 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.932 #2 INITED exec/s: 0 rss: 64Mb 00:07:35.932 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:35.932 This may also happen if the target rejected all inputs we tried so far 00:07:36.189 [2024-05-15 12:28:20.552390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:36.189 [2024-05-15 12:28:20.601421] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:36.189 [2024-05-15 12:28:20.601451] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.445 NEW_FUNC[1/648]: 0x484d10 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:36.445 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:36.445 #23 NEW cov: 10907 ft: 10662 corp: 2/10b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:36.703 [2024-05-15 12:28:21.077482] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:36.703 [2024-05-15 12:28:21.077522] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.703 #29 NEW cov: 10921 ft: 13668 corp: 3/19b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:36.703 [2024-05-15 12:28:21.264518] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:36.703 [2024-05-15 12:28:21.264547] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.960 NEW_FUNC[1/1]: 0x19f6280 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:36.960 #30 NEW cov: 10938 ft: 14354 corp: 4/28b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:07:36.960 [2024-05-15 12:28:21.448666] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:36.960 [2024-05-15 12:28:21.448697] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:36.960 #36 NEW cov: 10938 ft: 14721 corp: 5/37b lim: 9 exec/s: 36 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:07:37.217 [2024-05-15 12:28:21.633369] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.217 [2024-05-15 12:28:21.633406] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.217 #37 NEW cov: 10938 ft: 15677 corp: 6/46b lim: 9 exec/s: 37 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:37.217 [2024-05-15 12:28:21.816758] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.217 [2024-05-15 12:28:21.816788] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.474 #38 NEW cov: 10938 ft: 16142 corp: 7/55b lim: 9 exec/s: 38 rss: 73Mb L: 9/9 MS: 1 ChangeBit- 00:07:37.474 [2024-05-15 12:28:22.002843] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.474 [2024-05-15 12:28:22.002872] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.730 #39 NEW cov: 10938 ft: 16177 corp: 8/64b lim: 9 exec/s: 39 rss: 73Mb L: 9/9 MS: 1 ChangeBit- 00:07:37.730 [2024-05-15 12:28:22.187817] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.730 [2024-05-15 12:28:22.187846] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.730 #40 NEW cov: 10938 ft: 16221 corp: 9/73b lim: 9 exec/s: 40 rss: 73Mb L: 
9/9 MS: 1 ShuffleBytes- 00:07:37.986 [2024-05-15 12:28:22.374071] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.986 [2024-05-15 12:28:22.374101] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:37.986 #41 NEW cov: 10945 ft: 17741 corp: 10/82b lim: 9 exec/s: 41 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:07:37.986 [2024-05-15 12:28:22.568461] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:37.986 [2024-05-15 12:28:22.568490] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:38.243 #42 NEW cov: 10945 ft: 17964 corp: 11/91b lim: 9 exec/s: 21 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:07:38.243 #42 DONE cov: 10945 ft: 17964 corp: 11/91b lim: 9 exec/s: 21 rss: 73Mb 00:07:38.243 Done 42 runs in 2 second(s) 00:07:38.243 [2024-05-15 12:28:22.702569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:38.243 [2024-05-15 12:28:22.752423] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:38.500 12:28:22 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:38.500 12:28:22 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.500 12:28:22 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.500 12:28:22 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:38.500 00:07:38.500 real 0m19.537s 00:07:38.500 user 0m27.411s 00:07:38.500 sys 0m1.788s 00:07:38.500 12:28:22 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:38.500 12:28:22 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:38.500 ************************************ 00:07:38.500 END TEST vfio_fuzz 00:07:38.500 ************************************ 00:07:38.500 12:28:22 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:07:38.500 00:07:38.500 real 1m23.751s 00:07:38.500 user 2m7.686s 00:07:38.500 sys 0m8.989s 00:07:38.500 12:28:22 llvm_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:38.500 12:28:22 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:38.500 ************************************ 00:07:38.500 END TEST llvm_fuzz 00:07:38.500 ************************************ 00:07:38.500 12:28:23 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:07:38.500 12:28:23 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:07:38.500 12:28:23 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:07:38.500 12:28:23 -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:38.500 12:28:23 -- common/autotest_common.sh@10 -- # set +x 00:07:38.500 12:28:23 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:07:38.500 12:28:23 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:07:38.500 12:28:23 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:07:38.500 12:28:23 -- common/autotest_common.sh@10 -- # set +x 00:07:45.053 INFO: APP EXITING 00:07:45.053 INFO: killing all VMs 00:07:45.053 INFO: killing vhost app 00:07:45.053 INFO: EXIT DONE 00:07:48.330 Waiting for block devices as requested 00:07:48.330 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:48.330 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:48.330 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:48.330 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 
00:07:48.330 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:48.330 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:48.589 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:48.589 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:48.589 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:48.847 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:48.847 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:48.847 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:49.104 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:49.104 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:49.104 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:49.104 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:49.360 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:07:52.638 Cleaning
00:07:52.638 Removing: /dev/shm/spdk_tgt_trace.pid2378451
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2375999
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2377251
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2378451
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2379159
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2380000
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2380278
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2381385
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2381408
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2381810
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2382131
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2382452
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2382790
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2383115
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2383409
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2383690
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2383998
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2384867
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2388021
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2388324
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2388715
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2388880
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2389455
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2389678
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2390200
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2390303
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2390598
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2390865
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2391027
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2391173
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2391708
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2391892
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2392117
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2392437
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2392737
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2392770
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2392930
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2393160
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2393406
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2393694
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2393975
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2394264
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2394552
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2394833
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2395118
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2395399
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2395684
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2395969
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2396214
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2396448
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2396685
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2396902
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2397153
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2397436
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2397728
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2398007
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2398292
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2398467
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2398856
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2399432
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2400093
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2400828
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2401346
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2401875
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2402295
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2402702
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2403231
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2403615
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2404054
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2404591
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2404960
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2405410
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2405939
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2406263
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2406764
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2407299
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2407600
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2408124
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2408652
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2409001
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2409473
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2410013
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2410339
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2410836
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2411445
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2411977
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2412405
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2412809
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2413342
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2413879
00:07:52.638 Removing: /var/run/dpdk/spdk_pid2414338
00:07:52.638 Clean
00:07:52.639 12:28:37 -- common/autotest_common.sh@1448 -- # return 0
00:07:52.639 12:28:37 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:07:52.639 12:28:37 -- common/autotest_common.sh@727 -- # xtrace_disable
00:07:52.639 12:28:37 -- common/autotest_common.sh@10 -- # set +x
00:07:52.639 12:28:37 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:07:52.639 12:28:37 -- common/autotest_common.sh@727 -- # xtrace_disable
00:07:52.639 12:28:37 -- common/autotest_common.sh@10 -- # set +x
00:07:52.639 12:28:37 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:07:52.639 12:28:37 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:07:52.639 12:28:37 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:07:52.639 12:28:37 -- spdk/autotest.sh@387 -- # hash lcov
00:07:52.639 12:28:37 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:07:52.639 12:28:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:07:52.639 12:28:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:07:52.639 12:28:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:52.639 12:28:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:52.639 12:28:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.639 12:28:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.639 12:28:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.639 12:28:37 -- paths/export.sh@5 -- $ export PATH
00:07:52.639 12:28:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.639 12:28:37 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:07:52.897 12:28:37 -- common/autobuild_common.sh@437 -- $ date +%s
00:07:52.897 12:28:37 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715768917.XXXXXX
00:07:52.897 12:28:37 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715768917.MF6P7W
00:07:52.897 12:28:37 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:07:52.897 12:28:37 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:07:52.897 12:28:37 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:07:52.897 12:28:37 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:07:52.897 12:28:37 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:07:52.897 12:28:37 -- common/autobuild_common.sh@453 -- $ get_config_params
00:07:52.897 12:28:37 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:07:52.897 12:28:37 -- common/autotest_common.sh@10 -- $ set +x
00:07:52.897 12:28:37 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:07:52.897 12:28:37 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:07:52.897 12:28:37 -- pm/common@17 -- $ local monitor
00:07:52.897 12:28:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:52.897 12:28:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:52.897 12:28:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:52.897 12:28:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:52.897 12:28:37 -- pm/common@25 -- $ sleep 1
00:07:52.897 12:28:37 -- pm/common@21 -- $ date +%s
00:07:52.897 12:28:37 -- pm/common@21 -- $ date +%s
00:07:52.897 12:28:37 -- pm/common@21 -- $ date +%s
00:07:52.897 12:28:37 -- pm/common@21 -- $ date +%s
00:07:52.897 12:28:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715768917
00:07:52.897 12:28:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715768917
00:07:52.897 12:28:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715768917
00:07:52.897 12:28:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715768917
00:07:52.897 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715768917_collect-vmstat.pm.log
00:07:52.897 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715768917_collect-cpu-load.pm.log
00:07:52.897 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715768917_collect-cpu-temp.pm.log
00:07:52.897 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715768917_collect-bmc-pm.bmc.pm.log
00:07:53.831 12:28:38 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:07:53.831 12:28:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:07:53.831 12:28:38 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:53.831 12:28:38 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:07:53.831 12:28:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:07:53.831 12:28:38 -- spdk/autopackage.sh@19 -- $ timing_finish
00:07:53.831 12:28:38 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:07:53.831 12:28:38 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:07:53.831 12:28:38 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:07:53.831 12:28:38 -- spdk/autopackage.sh@20 -- $ exit 0
00:07:53.831 12:28:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:07:53.831 12:28:38 -- pm/common@29 -- $ signal_monitor_resources TERM
00:07:53.831 12:28:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:07:53.831 12:28:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:53.831 12:28:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:07:53.831 12:28:38 -- pm/common@44 -- $ pid=2421423
00:07:53.831 12:28:38 -- pm/common@50 -- $ kill -TERM 2421423
00:07:53.831 12:28:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:53.831 12:28:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:07:53.831 12:28:38 -- pm/common@44 -- $ pid=2421424
00:07:53.831 12:28:38 -- pm/common@50 -- $ kill -TERM 2421424
00:07:53.831 12:28:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:53.831 12:28:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:07:53.831 12:28:38 -- pm/common@44 -- $ pid=2421425
00:07:53.831 12:28:38 -- pm/common@50 -- $ kill -TERM 2421425
00:07:53.831 12:28:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:53.831 12:28:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:07:53.831 12:28:38 -- pm/common@44 -- $ pid=2421471
00:07:53.831 12:28:38 -- pm/common@50 -- $ sudo -E kill -TERM 2421471
00:07:53.841 + [[ -n 2270116 ]]
00:07:53.841 + sudo kill 2270116
00:07:53.849 [Pipeline] }
00:07:53.859 [Pipeline] // stage
00:07:53.866 [Pipeline] }
00:07:53.884 [Pipeline] // timeout
00:07:53.890 [Pipeline] }
00:07:53.907 [Pipeline] // catchError
00:07:53.912 [Pipeline] }
00:07:53.930 [Pipeline] // wrap
00:07:53.936 [Pipeline] }
00:07:53.951 [Pipeline] // catchError
00:07:53.960 [Pipeline] stage
00:07:53.962 [Pipeline] { (Epilogue)
00:07:53.977 [Pipeline] catchError
00:07:53.979 [Pipeline] {
00:07:53.993 [Pipeline] echo
00:07:53.994 Cleanup processes
00:07:54.000 [Pipeline] sh
00:07:54.280 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:54.280 2331625 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715768576
00:07:54.280 2331666 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715768576
00:07:54.280 2421576 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:07:54.280 2422378 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:54.294 [Pipeline] sh
00:07:54.574 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:54.574 ++ grep -v 'sudo pgrep'
00:07:54.574 ++ awk '{print $1}'
00:07:54.574 + sudo kill -9 2331625 2331666 2421576
00:07:54.628 [Pipeline] sh
00:07:54.935 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:07:54.935 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:07:54.935 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:07:56.313 [Pipeline] sh
00:07:56.591 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:07:56.591 Artifacts sizes are good
00:07:56.605 [Pipeline] archiveArtifacts
00:07:56.611 Archiving artifacts
00:07:56.648 [Pipeline] sh
00:07:56.926 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:07:56.940 [Pipeline] cleanWs
00:07:56.948 [WS-CLEANUP] Deleting project workspace...
00:07:56.948 [WS-CLEANUP] Deferred wipeout is used...
00:07:56.954 [WS-CLEANUP] done
00:07:56.956 [Pipeline] }
00:07:56.975 [Pipeline] // catchError
00:07:56.986 [Pipeline] sh
00:07:57.263 + logger -p user.info -t JENKINS-CI
00:07:57.272 [Pipeline] }
00:07:57.290 [Pipeline] // stage
00:07:57.295 [Pipeline] }
00:07:57.312 [Pipeline] // node
00:07:57.318 [Pipeline] End of Pipeline
00:07:57.349 Finished: SUCCESS